Dec  8 04:08:39 np0005550137 kernel: Linux version 5.14.0-645.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025
Dec  8 04:08:39 np0005550137 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  8 04:08:39 np0005550137 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  8 04:08:39 np0005550137 kernel: BIOS-provided physical RAM map:
Dec  8 04:08:39 np0005550137 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  8 04:08:39 np0005550137 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  8 04:08:39 np0005550137 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  8 04:08:39 np0005550137 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  8 04:08:39 np0005550137 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  8 04:08:39 np0005550137 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  8 04:08:39 np0005550137 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  8 04:08:39 np0005550137 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  8 04:08:39 np0005550137 kernel: NX (Execute Disable) protection: active
Dec  8 04:08:39 np0005550137 kernel: APIC: Static calls initialized
Dec  8 04:08:39 np0005550137 kernel: SMBIOS 2.8 present.
Dec  8 04:08:39 np0005550137 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  8 04:08:39 np0005550137 kernel: Hypervisor detected: KVM
Dec  8 04:08:39 np0005550137 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  8 04:08:39 np0005550137 kernel: kvm-clock: using sched offset of 3698855151 cycles
Dec  8 04:08:39 np0005550137 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  8 04:08:39 np0005550137 kernel: tsc: Detected 2800.000 MHz processor
Dec  8 04:08:39 np0005550137 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  8 04:08:39 np0005550137 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  8 04:08:39 np0005550137 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  8 04:08:39 np0005550137 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  8 04:08:39 np0005550137 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  8 04:08:39 np0005550137 kernel: Using GB pages for direct mapping
Dec  8 04:08:39 np0005550137 kernel: RAMDISK: [mem 0x2d472000-0x32a30fff]
Dec  8 04:08:39 np0005550137 kernel: ACPI: Early table checksum verification disabled
Dec  8 04:08:39 np0005550137 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  8 04:08:39 np0005550137 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  8 04:08:39 np0005550137 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  8 04:08:39 np0005550137 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  8 04:08:39 np0005550137 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  8 04:08:39 np0005550137 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  8 04:08:39 np0005550137 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  8 04:08:39 np0005550137 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  8 04:08:39 np0005550137 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  8 04:08:39 np0005550137 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  8 04:08:39 np0005550137 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  8 04:08:39 np0005550137 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  8 04:08:39 np0005550137 kernel: No NUMA configuration found
Dec  8 04:08:39 np0005550137 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  8 04:08:39 np0005550137 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  8 04:08:39 np0005550137 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  8 04:08:39 np0005550137 kernel: Zone ranges:
Dec  8 04:08:39 np0005550137 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  8 04:08:39 np0005550137 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  8 04:08:39 np0005550137 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  8 04:08:39 np0005550137 kernel:  Device   empty
Dec  8 04:08:39 np0005550137 kernel: Movable zone start for each node
Dec  8 04:08:39 np0005550137 kernel: Early memory node ranges
Dec  8 04:08:39 np0005550137 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  8 04:08:39 np0005550137 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  8 04:08:39 np0005550137 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  8 04:08:39 np0005550137 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  8 04:08:39 np0005550137 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  8 04:08:39 np0005550137 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  8 04:08:39 np0005550137 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  8 04:08:39 np0005550137 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  8 04:08:39 np0005550137 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  8 04:08:39 np0005550137 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  8 04:08:39 np0005550137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  8 04:08:39 np0005550137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  8 04:08:39 np0005550137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  8 04:08:39 np0005550137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  8 04:08:39 np0005550137 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  8 04:08:39 np0005550137 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  8 04:08:39 np0005550137 kernel: TSC deadline timer available
Dec  8 04:08:39 np0005550137 kernel: CPU topo: Max. logical packages:   8
Dec  8 04:08:39 np0005550137 kernel: CPU topo: Max. logical dies:       8
Dec  8 04:08:39 np0005550137 kernel: CPU topo: Max. dies per package:   1
Dec  8 04:08:39 np0005550137 kernel: CPU topo: Max. threads per core:   1
Dec  8 04:08:39 np0005550137 kernel: CPU topo: Num. cores per package:     1
Dec  8 04:08:39 np0005550137 kernel: CPU topo: Num. threads per package:   1
Dec  8 04:08:39 np0005550137 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  8 04:08:39 np0005550137 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  8 04:08:39 np0005550137 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  8 04:08:39 np0005550137 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  8 04:08:39 np0005550137 kernel: Booting paravirtualized kernel on KVM
Dec  8 04:08:39 np0005550137 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  8 04:08:39 np0005550137 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  8 04:08:39 np0005550137 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  8 04:08:39 np0005550137 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  8 04:08:39 np0005550137 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  8 04:08:39 np0005550137 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64", will be passed to user space.
Dec  8 04:08:39 np0005550137 kernel: random: crng init done
Dec  8 04:08:39 np0005550137 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: Fallback order for Node 0: 0 
Dec  8 04:08:39 np0005550137 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  8 04:08:39 np0005550137 kernel: Policy zone: Normal
Dec  8 04:08:39 np0005550137 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  8 04:08:39 np0005550137 kernel: software IO TLB: area num 8.
Dec  8 04:08:39 np0005550137 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  8 04:08:39 np0005550137 kernel: ftrace: allocating 49335 entries in 193 pages
Dec  8 04:08:39 np0005550137 kernel: ftrace: allocated 193 pages with 3 groups
Dec  8 04:08:39 np0005550137 kernel: Dynamic Preempt: voluntary
Dec  8 04:08:39 np0005550137 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  8 04:08:39 np0005550137 kernel: rcu: 	RCU event tracing is enabled.
Dec  8 04:08:39 np0005550137 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  8 04:08:39 np0005550137 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  8 04:08:39 np0005550137 kernel: 	Rude variant of Tasks RCU enabled.
Dec  8 04:08:39 np0005550137 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  8 04:08:39 np0005550137 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  8 04:08:39 np0005550137 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  8 04:08:39 np0005550137 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  8 04:08:39 np0005550137 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  8 04:08:39 np0005550137 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  8 04:08:39 np0005550137 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  8 04:08:39 np0005550137 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  8 04:08:39 np0005550137 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  8 04:08:39 np0005550137 kernel: Console: colour VGA+ 80x25
Dec  8 04:08:39 np0005550137 kernel: printk: console [ttyS0] enabled
Dec  8 04:08:39 np0005550137 kernel: ACPI: Core revision 20230331
Dec  8 04:08:39 np0005550137 kernel: APIC: Switch to symmetric I/O mode setup
Dec  8 04:08:39 np0005550137 kernel: x2apic enabled
Dec  8 04:08:39 np0005550137 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  8 04:08:39 np0005550137 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  8 04:08:39 np0005550137 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Dec  8 04:08:39 np0005550137 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  8 04:08:39 np0005550137 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  8 04:08:39 np0005550137 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  8 04:08:39 np0005550137 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  8 04:08:39 np0005550137 kernel: Spectre V2 : Mitigation: Retpolines
Dec  8 04:08:39 np0005550137 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  8 04:08:39 np0005550137 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  8 04:08:39 np0005550137 kernel: RETBleed: Mitigation: untrained return thunk
Dec  8 04:08:39 np0005550137 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  8 04:08:39 np0005550137 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  8 04:08:39 np0005550137 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  8 04:08:39 np0005550137 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  8 04:08:39 np0005550137 kernel: x86/bugs: return thunk changed
Dec  8 04:08:39 np0005550137 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  8 04:08:39 np0005550137 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  8 04:08:39 np0005550137 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  8 04:08:39 np0005550137 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  8 04:08:39 np0005550137 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  8 04:08:39 np0005550137 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  8 04:08:39 np0005550137 kernel: Freeing SMP alternatives memory: 40K
Dec  8 04:08:39 np0005550137 kernel: pid_max: default: 32768 minimum: 301
Dec  8 04:08:39 np0005550137 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  8 04:08:39 np0005550137 kernel: landlock: Up and running.
Dec  8 04:08:39 np0005550137 kernel: Yama: becoming mindful.
Dec  8 04:08:39 np0005550137 kernel: SELinux:  Initializing.
Dec  8 04:08:39 np0005550137 kernel: LSM support for eBPF active
Dec  8 04:08:39 np0005550137 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  8 04:08:39 np0005550137 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  8 04:08:39 np0005550137 kernel: ... version:                0
Dec  8 04:08:39 np0005550137 kernel: ... bit width:              48
Dec  8 04:08:39 np0005550137 kernel: ... generic registers:      6
Dec  8 04:08:39 np0005550137 kernel: ... value mask:             0000ffffffffffff
Dec  8 04:08:39 np0005550137 kernel: ... max period:             00007fffffffffff
Dec  8 04:08:39 np0005550137 kernel: ... fixed-purpose events:   0
Dec  8 04:08:39 np0005550137 kernel: ... event mask:             000000000000003f
Dec  8 04:08:39 np0005550137 kernel: signal: max sigframe size: 1776
Dec  8 04:08:39 np0005550137 kernel: rcu: Hierarchical SRCU implementation.
Dec  8 04:08:39 np0005550137 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  8 04:08:39 np0005550137 kernel: smp: Bringing up secondary CPUs ...
Dec  8 04:08:39 np0005550137 kernel: smpboot: x86: Booting SMP configuration:
Dec  8 04:08:39 np0005550137 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  8 04:08:39 np0005550137 kernel: smp: Brought up 1 node, 8 CPUs
Dec  8 04:08:39 np0005550137 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Dec  8 04:08:39 np0005550137 kernel: node 0 deferred pages initialised in 35ms
Dec  8 04:08:39 np0005550137 kernel: Memory: 7764056K/8388068K available (16384K kernel code, 5795K rwdata, 13908K rodata, 4196K init, 7156K bss, 618204K reserved, 0K cma-reserved)
Dec  8 04:08:39 np0005550137 kernel: devtmpfs: initialized
Dec  8 04:08:39 np0005550137 kernel: x86/mm: Memory block size: 128MB
Dec  8 04:08:39 np0005550137 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  8 04:08:39 np0005550137 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  8 04:08:39 np0005550137 kernel: pinctrl core: initialized pinctrl subsystem
Dec  8 04:08:39 np0005550137 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  8 04:08:39 np0005550137 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  8 04:08:39 np0005550137 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  8 04:08:39 np0005550137 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  8 04:08:39 np0005550137 kernel: audit: initializing netlink subsys (disabled)
Dec  8 04:08:39 np0005550137 kernel: audit: type=2000 audit(1765184916.770:1): state=initialized audit_enabled=0 res=1
Dec  8 04:08:39 np0005550137 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  8 04:08:39 np0005550137 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  8 04:08:39 np0005550137 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  8 04:08:39 np0005550137 kernel: cpuidle: using governor menu
Dec  8 04:08:39 np0005550137 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  8 04:08:39 np0005550137 kernel: PCI: Using configuration type 1 for base access
Dec  8 04:08:39 np0005550137 kernel: PCI: Using configuration type 1 for extended access
Dec  8 04:08:39 np0005550137 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  8 04:08:39 np0005550137 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  8 04:08:39 np0005550137 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  8 04:08:39 np0005550137 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  8 04:08:39 np0005550137 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  8 04:08:39 np0005550137 kernel: Demotion targets for Node 0: null
Dec  8 04:08:39 np0005550137 kernel: cryptd: max_cpu_qlen set to 1000
Dec  8 04:08:39 np0005550137 kernel: ACPI: Added _OSI(Module Device)
Dec  8 04:08:39 np0005550137 kernel: ACPI: Added _OSI(Processor Device)
Dec  8 04:08:39 np0005550137 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  8 04:08:39 np0005550137 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  8 04:08:39 np0005550137 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  8 04:08:39 np0005550137 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  8 04:08:39 np0005550137 kernel: ACPI: Interpreter enabled
Dec  8 04:08:39 np0005550137 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  8 04:08:39 np0005550137 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  8 04:08:39 np0005550137 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  8 04:08:39 np0005550137 kernel: PCI: Using E820 reservations for host bridge windows
Dec  8 04:08:39 np0005550137 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  8 04:08:39 np0005550137 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  8 04:08:39 np0005550137 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [3] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [4] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [5] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [6] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [7] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [8] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [9] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [10] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [11] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [12] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [13] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [14] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [15] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [16] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [17] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [18] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [19] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [20] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [21] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [22] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [23] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [24] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [25] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [26] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [27] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [28] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [29] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [30] registered
Dec  8 04:08:39 np0005550137 kernel: acpiphp: Slot [31] registered
Dec  8 04:08:39 np0005550137 kernel: PCI host bridge to bus 0000:00
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  8 04:08:39 np0005550137 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  8 04:08:39 np0005550137 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  8 04:08:39 np0005550137 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  8 04:08:39 np0005550137 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  8 04:08:39 np0005550137 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  8 04:08:39 np0005550137 kernel: iommu: Default domain type: Translated
Dec  8 04:08:39 np0005550137 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  8 04:08:39 np0005550137 kernel: SCSI subsystem initialized
Dec  8 04:08:39 np0005550137 kernel: ACPI: bus type USB registered
Dec  8 04:08:39 np0005550137 kernel: usbcore: registered new interface driver usbfs
Dec  8 04:08:39 np0005550137 kernel: usbcore: registered new interface driver hub
Dec  8 04:08:39 np0005550137 kernel: usbcore: registered new device driver usb
Dec  8 04:08:39 np0005550137 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  8 04:08:39 np0005550137 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  8 04:08:39 np0005550137 kernel: PTP clock support registered
Dec  8 04:08:39 np0005550137 kernel: EDAC MC: Ver: 3.0.0
Dec  8 04:08:39 np0005550137 kernel: NetLabel: Initializing
Dec  8 04:08:39 np0005550137 kernel: NetLabel:  domain hash size = 128
Dec  8 04:08:39 np0005550137 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  8 04:08:39 np0005550137 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  8 04:08:39 np0005550137 kernel: PCI: Using ACPI for IRQ routing
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  8 04:08:39 np0005550137 kernel: vgaarb: loaded
Dec  8 04:08:39 np0005550137 kernel: clocksource: Switched to clocksource kvm-clock
Dec  8 04:08:39 np0005550137 kernel: VFS: Disk quotas dquot_6.6.0
Dec  8 04:08:39 np0005550137 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  8 04:08:39 np0005550137 kernel: pnp: PnP ACPI init
Dec  8 04:08:39 np0005550137 kernel: pnp: PnP ACPI: found 5 devices
Dec  8 04:08:39 np0005550137 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  8 04:08:39 np0005550137 kernel: NET: Registered PF_INET protocol family
Dec  8 04:08:39 np0005550137 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  8 04:08:39 np0005550137 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  8 04:08:39 np0005550137 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  8 04:08:39 np0005550137 kernel: NET: Registered PF_XDP protocol family
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  8 04:08:39 np0005550137 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  8 04:08:39 np0005550137 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  8 04:08:39 np0005550137 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 74409 usecs
Dec  8 04:08:39 np0005550137 kernel: PCI: CLS 0 bytes, default 64
Dec  8 04:08:39 np0005550137 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  8 04:08:39 np0005550137 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  8 04:08:39 np0005550137 kernel: ACPI: bus type thunderbolt registered
Dec  8 04:08:39 np0005550137 kernel: Trying to unpack rootfs image as initramfs...
Dec  8 04:08:39 np0005550137 kernel: Initialise system trusted keyrings
Dec  8 04:08:39 np0005550137 kernel: Key type blacklist registered
Dec  8 04:08:39 np0005550137 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  8 04:08:39 np0005550137 kernel: zbud: loaded
Dec  8 04:08:39 np0005550137 kernel: integrity: Platform Keyring initialized
Dec  8 04:08:39 np0005550137 kernel: integrity: Machine keyring initialized
Dec  8 04:08:39 np0005550137 kernel: Freeing initrd memory: 87804K
Dec  8 04:08:39 np0005550137 kernel: NET: Registered PF_ALG protocol family
Dec  8 04:08:39 np0005550137 kernel: xor: automatically using best checksumming function   avx       
Dec  8 04:08:39 np0005550137 kernel: Key type asymmetric registered
Dec  8 04:08:39 np0005550137 kernel: Asymmetric key parser 'x509' registered
Dec  8 04:08:39 np0005550137 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  8 04:08:39 np0005550137 kernel: io scheduler mq-deadline registered
Dec  8 04:08:39 np0005550137 kernel: io scheduler kyber registered
Dec  8 04:08:39 np0005550137 kernel: io scheduler bfq registered
Dec  8 04:08:39 np0005550137 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  8 04:08:39 np0005550137 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  8 04:08:39 np0005550137 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  8 04:08:39 np0005550137 kernel: ACPI: button: Power Button [PWRF]
Dec  8 04:08:39 np0005550137 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  8 04:08:39 np0005550137 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  8 04:08:39 np0005550137 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  8 04:08:39 np0005550137 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  8 04:08:39 np0005550137 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  8 04:08:39 np0005550137 kernel: Non-volatile memory driver v1.3
Dec  8 04:08:39 np0005550137 kernel: rdac: device handler registered
Dec  8 04:08:39 np0005550137 kernel: hp_sw: device handler registered
Dec  8 04:08:39 np0005550137 kernel: emc: device handler registered
Dec  8 04:08:39 np0005550137 kernel: alua: device handler registered
Dec  8 04:08:39 np0005550137 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  8 04:08:39 np0005550137 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  8 04:08:39 np0005550137 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  8 04:08:39 np0005550137 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  8 04:08:39 np0005550137 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  8 04:08:39 np0005550137 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  8 04:08:39 np0005550137 kernel: usb usb1: Product: UHCI Host Controller
Dec  8 04:08:39 np0005550137 kernel: usb usb1: Manufacturer: Linux 5.14.0-645.el9.x86_64 uhci_hcd
Dec  8 04:08:39 np0005550137 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  8 04:08:39 np0005550137 kernel: hub 1-0:1.0: USB hub found
Dec  8 04:08:39 np0005550137 kernel: hub 1-0:1.0: 2 ports detected
Dec  8 04:08:39 np0005550137 kernel: usbcore: registered new interface driver usbserial_generic
Dec  8 04:08:39 np0005550137 kernel: usbserial: USB Serial support registered for generic
Dec  8 04:08:39 np0005550137 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  8 04:08:39 np0005550137 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  8 04:08:39 np0005550137 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  8 04:08:39 np0005550137 kernel: mousedev: PS/2 mouse device common for all mice
Dec  8 04:08:39 np0005550137 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  8 04:08:39 np0005550137 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  8 04:08:39 np0005550137 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  8 04:08:39 np0005550137 kernel: rtc_cmos 00:04: registered as rtc0
Dec  8 04:08:39 np0005550137 kernel: rtc_cmos 00:04: setting system clock to 2025-12-08T09:08:38 UTC (1765184918)
Dec  8 04:08:39 np0005550137 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  8 04:08:39 np0005550137 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  8 04:08:39 np0005550137 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  8 04:08:39 np0005550137 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  8 04:08:39 np0005550137 kernel: usbcore: registered new interface driver usbhid
Dec  8 04:08:39 np0005550137 kernel: usbhid: USB HID core driver
Dec  8 04:08:39 np0005550137 kernel: drop_monitor: Initializing network drop monitor service
Dec  8 04:08:39 np0005550137 kernel: Initializing XFRM netlink socket
Dec  8 04:08:39 np0005550137 kernel: NET: Registered PF_INET6 protocol family
Dec  8 04:08:39 np0005550137 kernel: Segment Routing with IPv6
Dec  8 04:08:39 np0005550137 kernel: NET: Registered PF_PACKET protocol family
Dec  8 04:08:39 np0005550137 kernel: mpls_gso: MPLS GSO support
Dec  8 04:08:39 np0005550137 kernel: IPI shorthand broadcast: enabled
Dec  8 04:08:39 np0005550137 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  8 04:08:39 np0005550137 kernel: AES CTR mode by8 optimization enabled
Dec  8 04:08:39 np0005550137 kernel: sched_clock: Marking stable (2840007379, 153308096)->(3201507347, -208191872)
Dec  8 04:08:39 np0005550137 kernel: registered taskstats version 1
Dec  8 04:08:39 np0005550137 kernel: Loading compiled-in X.509 certificates
Dec  8 04:08:39 np0005550137 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  8 04:08:39 np0005550137 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  8 04:08:39 np0005550137 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  8 04:08:39 np0005550137 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  8 04:08:39 np0005550137 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  8 04:08:39 np0005550137 kernel: Demotion targets for Node 0: null
Dec  8 04:08:39 np0005550137 kernel: page_owner is disabled
Dec  8 04:08:39 np0005550137 kernel: Key type .fscrypt registered
Dec  8 04:08:39 np0005550137 kernel: Key type fscrypt-provisioning registered
Dec  8 04:08:39 np0005550137 kernel: Key type big_key registered
Dec  8 04:08:39 np0005550137 kernel: Key type encrypted registered
Dec  8 04:08:39 np0005550137 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  8 04:08:39 np0005550137 kernel: Loading compiled-in module X.509 certificates
Dec  8 04:08:39 np0005550137 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4c28336b4850d771d036b52fb2778fdb4f02f708'
Dec  8 04:08:39 np0005550137 kernel: ima: Allocated hash algorithm: sha256
Dec  8 04:08:39 np0005550137 kernel: ima: No architecture policies found
Dec  8 04:08:39 np0005550137 kernel: evm: Initialising EVM extended attributes:
Dec  8 04:08:39 np0005550137 kernel: evm: security.selinux
Dec  8 04:08:39 np0005550137 kernel: evm: security.SMACK64 (disabled)
Dec  8 04:08:39 np0005550137 kernel: evm: security.SMACK64EXEC (disabled)
Dec  8 04:08:39 np0005550137 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  8 04:08:39 np0005550137 kernel: evm: security.SMACK64MMAP (disabled)
Dec  8 04:08:39 np0005550137 kernel: evm: security.apparmor (disabled)
Dec  8 04:08:39 np0005550137 kernel: evm: security.ima
Dec  8 04:08:39 np0005550137 kernel: evm: security.capability
Dec  8 04:08:39 np0005550137 kernel: evm: HMAC attrs: 0x1
Dec  8 04:08:39 np0005550137 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  8 04:08:39 np0005550137 kernel: Running certificate verification RSA selftest
Dec  8 04:08:39 np0005550137 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  8 04:08:39 np0005550137 kernel: Running certificate verification ECDSA selftest
Dec  8 04:08:39 np0005550137 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  8 04:08:39 np0005550137 kernel: clk: Disabling unused clocks
Dec  8 04:08:39 np0005550137 kernel: Freeing unused decrypted memory: 2028K
Dec  8 04:08:39 np0005550137 kernel: Freeing unused kernel image (initmem) memory: 4196K
Dec  8 04:08:39 np0005550137 kernel: Write protecting the kernel read-only data: 30720k
Dec  8 04:08:39 np0005550137 kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Dec  8 04:08:39 np0005550137 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  8 04:08:39 np0005550137 kernel: Run /init as init process
Dec  8 04:08:39 np0005550137 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  8 04:08:39 np0005550137 systemd: Detected virtualization kvm.
Dec  8 04:08:39 np0005550137 systemd: Detected architecture x86-64.
Dec  8 04:08:39 np0005550137 systemd: Running in initrd.
Dec  8 04:08:39 np0005550137 systemd: No hostname configured, using default hostname.
Dec  8 04:08:39 np0005550137 systemd: Hostname set to <localhost>.
Dec  8 04:08:39 np0005550137 systemd: Initializing machine ID from VM UUID.
Dec  8 04:08:39 np0005550137 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  8 04:08:39 np0005550137 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  8 04:08:39 np0005550137 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  8 04:08:39 np0005550137 kernel: usb 1-1: Manufacturer: QEMU
Dec  8 04:08:39 np0005550137 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  8 04:08:39 np0005550137 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  8 04:08:39 np0005550137 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  8 04:08:39 np0005550137 systemd: Queued start job for default target Initrd Default Target.
Dec  8 04:08:39 np0005550137 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  8 04:08:39 np0005550137 systemd: Reached target Local Encrypted Volumes.
Dec  8 04:08:39 np0005550137 systemd: Reached target Initrd /usr File System.
Dec  8 04:08:39 np0005550137 systemd: Reached target Local File Systems.
Dec  8 04:08:39 np0005550137 systemd: Reached target Path Units.
Dec  8 04:08:39 np0005550137 systemd: Reached target Slice Units.
Dec  8 04:08:39 np0005550137 systemd: Reached target Swaps.
Dec  8 04:08:39 np0005550137 systemd: Reached target Timer Units.
Dec  8 04:08:39 np0005550137 systemd: Listening on D-Bus System Message Bus Socket.
Dec  8 04:08:39 np0005550137 systemd: Listening on Journal Socket (/dev/log).
Dec  8 04:08:39 np0005550137 systemd: Listening on Journal Socket.
Dec  8 04:08:39 np0005550137 systemd: Listening on udev Control Socket.
Dec  8 04:08:39 np0005550137 systemd: Listening on udev Kernel Socket.
Dec  8 04:08:39 np0005550137 systemd: Reached target Socket Units.
Dec  8 04:08:39 np0005550137 systemd: Starting Create List of Static Device Nodes...
Dec  8 04:08:39 np0005550137 systemd: Starting Journal Service...
Dec  8 04:08:39 np0005550137 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  8 04:08:39 np0005550137 systemd: Starting Apply Kernel Variables...
Dec  8 04:08:39 np0005550137 systemd: Starting Create System Users...
Dec  8 04:08:39 np0005550137 systemd: Starting Setup Virtual Console...
Dec  8 04:08:39 np0005550137 systemd: Finished Create List of Static Device Nodes.
Dec  8 04:08:39 np0005550137 systemd: Finished Apply Kernel Variables.
Dec  8 04:08:39 np0005550137 systemd: Finished Create System Users.
Dec  8 04:08:39 np0005550137 systemd-journald[305]: Journal started
Dec  8 04:08:39 np0005550137 systemd-journald[305]: Runtime Journal (/run/log/journal/70b7e0bf66d34cde808b2ab631045a1a) is 8.0M, max 153.6M, 145.6M free.
Dec  8 04:08:39 np0005550137 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec  8 04:08:39 np0005550137 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec  8 04:08:39 np0005550137 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  8 04:08:39 np0005550137 systemd: Started Journal Service.
Dec  8 04:08:39 np0005550137 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  8 04:08:39 np0005550137 systemd[1]: Starting Create Volatile Files and Directories...
Dec  8 04:08:39 np0005550137 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  8 04:08:39 np0005550137 systemd[1]: Finished Create Volatile Files and Directories.
Dec  8 04:08:39 np0005550137 systemd[1]: Finished Setup Virtual Console.
Dec  8 04:08:39 np0005550137 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  8 04:08:39 np0005550137 systemd[1]: Starting dracut cmdline hook...
Dec  8 04:08:39 np0005550137 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Dec  8 04:08:39 np0005550137 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-645.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  8 04:08:39 np0005550137 systemd[1]: Finished dracut cmdline hook.
Dec  8 04:08:39 np0005550137 systemd[1]: Starting dracut pre-udev hook...
Dec  8 04:08:39 np0005550137 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  8 04:08:39 np0005550137 kernel: device-mapper: uevent: version 1.0.3
Dec  8 04:08:39 np0005550137 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  8 04:08:39 np0005550137 kernel: RPC: Registered named UNIX socket transport module.
Dec  8 04:08:39 np0005550137 kernel: RPC: Registered udp transport module.
Dec  8 04:08:39 np0005550137 kernel: RPC: Registered tcp transport module.
Dec  8 04:08:39 np0005550137 kernel: RPC: Registered tcp-with-tls transport module.
Dec  8 04:08:39 np0005550137 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  8 04:08:39 np0005550137 rpc.statd[442]: Version 2.5.4 starting
Dec  8 04:08:39 np0005550137 rpc.statd[442]: Initializing NSM state
Dec  8 04:08:39 np0005550137 rpc.idmapd[447]: Setting log level to 0
Dec  8 04:08:39 np0005550137 systemd[1]: Finished dracut pre-udev hook.
Dec  8 04:08:39 np0005550137 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  8 04:08:39 np0005550137 systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Dec  8 04:08:39 np0005550137 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  8 04:08:39 np0005550137 systemd[1]: Starting dracut pre-trigger hook...
Dec  8 04:08:39 np0005550137 systemd[1]: Finished dracut pre-trigger hook.
Dec  8 04:08:39 np0005550137 systemd[1]: Starting Coldplug All udev Devices...
Dec  8 04:08:40 np0005550137 systemd[1]: Created slice Slice /system/modprobe.
Dec  8 04:08:40 np0005550137 systemd[1]: Starting Load Kernel Module configfs...
Dec  8 04:08:40 np0005550137 systemd[1]: Finished Coldplug All udev Devices.
Dec  8 04:08:40 np0005550137 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  8 04:08:40 np0005550137 systemd[1]: Reached target Network.
Dec  8 04:08:40 np0005550137 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  8 04:08:40 np0005550137 systemd[1]: Starting dracut initqueue hook...
Dec  8 04:08:40 np0005550137 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  8 04:08:40 np0005550137 systemd[1]: Finished Load Kernel Module configfs.
Dec  8 04:08:40 np0005550137 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  8 04:08:40 np0005550137 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  8 04:08:40 np0005550137 kernel: vda: vda1
Dec  8 04:08:40 np0005550137 kernel: scsi host0: ata_piix
Dec  8 04:08:40 np0005550137 kernel: scsi host1: ata_piix
Dec  8 04:08:40 np0005550137 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  8 04:08:40 np0005550137 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  8 04:08:40 np0005550137 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  8 04:08:40 np0005550137 systemd[1]: Reached target Initrd Root Device.
Dec  8 04:08:40 np0005550137 systemd[1]: Mounting Kernel Configuration File System...
Dec  8 04:08:40 np0005550137 systemd[1]: Mounted Kernel Configuration File System.
Dec  8 04:08:40 np0005550137 systemd[1]: Reached target System Initialization.
Dec  8 04:08:40 np0005550137 systemd[1]: Reached target Basic System.
Dec  8 04:08:40 np0005550137 kernel: ata1: found unknown device (class 0)
Dec  8 04:08:40 np0005550137 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  8 04:08:40 np0005550137 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  8 04:08:40 np0005550137 systemd-udevd[484]: Network interface NamePolicy= disabled on kernel command line.
Dec  8 04:08:40 np0005550137 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  8 04:08:40 np0005550137 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  8 04:08:40 np0005550137 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  8 04:08:40 np0005550137 systemd[1]: Finished dracut initqueue hook.
Dec  8 04:08:40 np0005550137 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  8 04:08:40 np0005550137 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  8 04:08:40 np0005550137 systemd[1]: Reached target Remote File Systems.
Dec  8 04:08:40 np0005550137 systemd[1]: Starting dracut pre-mount hook...
Dec  8 04:08:40 np0005550137 systemd[1]: Finished dracut pre-mount hook.
Dec  8 04:08:40 np0005550137 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  8 04:08:40 np0005550137 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Dec  8 04:08:40 np0005550137 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  8 04:08:40 np0005550137 systemd[1]: Mounting /sysroot...
Dec  8 04:08:41 np0005550137 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  8 04:08:41 np0005550137 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  8 04:08:41 np0005550137 kernel: XFS (vda1): Ending clean mount
Dec  8 04:08:41 np0005550137 systemd[1]: Mounted /sysroot.
Dec  8 04:08:41 np0005550137 systemd[1]: Reached target Initrd Root File System.
Dec  8 04:08:41 np0005550137 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  8 04:08:41 np0005550137 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  8 04:08:41 np0005550137 systemd[1]: Reached target Initrd File Systems.
Dec  8 04:08:41 np0005550137 systemd[1]: Reached target Initrd Default Target.
Dec  8 04:08:41 np0005550137 systemd[1]: Starting dracut mount hook...
Dec  8 04:08:41 np0005550137 systemd[1]: Finished dracut mount hook.
Dec  8 04:08:41 np0005550137 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  8 04:08:41 np0005550137 rpc.idmapd[447]: exiting on signal 15
Dec  8 04:08:41 np0005550137 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  8 04:08:41 np0005550137 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Network.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Timer Units.
Dec  8 04:08:41 np0005550137 systemd[1]: dbus.socket: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  8 04:08:41 np0005550137 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Initrd Default Target.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Basic System.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Initrd Root Device.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Initrd /usr File System.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Path Units.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Remote File Systems.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Slice Units.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Socket Units.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target System Initialization.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Local File Systems.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Swaps.
Dec  8 04:08:41 np0005550137 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped dracut mount hook.
Dec  8 04:08:41 np0005550137 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped dracut pre-mount hook.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  8 04:08:41 np0005550137 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped dracut initqueue hook.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Apply Kernel Variables.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Coldplug All udev Devices.
Dec  8 04:08:41 np0005550137 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped dracut pre-trigger hook.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Setup Virtual Console.
Dec  8 04:08:41 np0005550137 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-udevd.service: Consumed 1.086s CPU time.
Dec  8 04:08:41 np0005550137 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Closed udev Control Socket.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Closed udev Kernel Socket.
Dec  8 04:08:41 np0005550137 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped dracut pre-udev hook.
Dec  8 04:08:41 np0005550137 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped dracut cmdline hook.
Dec  8 04:08:41 np0005550137 systemd[1]: Starting Cleanup udev Database...
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  8 04:08:41 np0005550137 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  8 04:08:41 np0005550137 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Stopped Create System Users.
Dec  8 04:08:41 np0005550137 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  8 04:08:41 np0005550137 systemd[1]: Finished Cleanup udev Database.
Dec  8 04:08:41 np0005550137 systemd[1]: Reached target Switch Root.
Dec  8 04:08:41 np0005550137 systemd[1]: Starting Switch Root...
Dec  8 04:08:41 np0005550137 systemd[1]: Switching root.
Dec  8 04:08:41 np0005550137 systemd-journald[305]: Journal stopped
Dec  8 04:08:42 np0005550137 systemd-journald: Received SIGTERM from PID 1 (systemd).
Dec  8 04:08:42 np0005550137 kernel: audit: type=1404 audit(1765184921.716:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  8 04:08:42 np0005550137 kernel: SELinux:  policy capability network_peer_controls=1
Dec  8 04:08:42 np0005550137 kernel: SELinux:  policy capability open_perms=1
Dec  8 04:08:42 np0005550137 kernel: SELinux:  policy capability extended_socket_class=1
Dec  8 04:08:42 np0005550137 kernel: SELinux:  policy capability always_check_network=0
Dec  8 04:08:42 np0005550137 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  8 04:08:42 np0005550137 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  8 04:08:42 np0005550137 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  8 04:08:42 np0005550137 kernel: audit: type=1403 audit(1765184921.863:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  8 04:08:42 np0005550137 systemd: Successfully loaded SELinux policy in 150.233ms.
Dec  8 04:08:42 np0005550137 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.859ms.
Dec  8 04:08:42 np0005550137 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  8 04:08:42 np0005550137 systemd: Detected virtualization kvm.
Dec  8 04:08:42 np0005550137 systemd: Detected architecture x86-64.
Dec  8 04:08:42 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:08:42 np0005550137 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  8 04:08:42 np0005550137 systemd: Stopped Switch Root.
Dec  8 04:08:42 np0005550137 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  8 04:08:42 np0005550137 systemd: Created slice Slice /system/getty.
Dec  8 04:08:42 np0005550137 systemd: Created slice Slice /system/serial-getty.
Dec  8 04:08:42 np0005550137 systemd: Created slice Slice /system/sshd-keygen.
Dec  8 04:08:42 np0005550137 systemd: Created slice User and Session Slice.
Dec  8 04:08:42 np0005550137 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  8 04:08:42 np0005550137 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  8 04:08:42 np0005550137 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  8 04:08:42 np0005550137 systemd: Reached target Local Encrypted Volumes.
Dec  8 04:08:42 np0005550137 systemd: Stopped target Switch Root.
Dec  8 04:08:42 np0005550137 systemd: Stopped target Initrd File Systems.
Dec  8 04:08:42 np0005550137 systemd: Stopped target Initrd Root File System.
Dec  8 04:08:42 np0005550137 systemd: Reached target Local Integrity Protected Volumes.
Dec  8 04:08:42 np0005550137 systemd: Reached target Path Units.
Dec  8 04:08:42 np0005550137 systemd: Reached target rpc_pipefs.target.
Dec  8 04:08:42 np0005550137 systemd: Reached target Slice Units.
Dec  8 04:08:42 np0005550137 systemd: Reached target Swaps.
Dec  8 04:08:42 np0005550137 systemd: Reached target Local Verity Protected Volumes.
Dec  8 04:08:42 np0005550137 systemd: Listening on RPCbind Server Activation Socket.
Dec  8 04:08:42 np0005550137 systemd: Reached target RPC Port Mapper.
Dec  8 04:08:42 np0005550137 systemd: Listening on Process Core Dump Socket.
Dec  8 04:08:42 np0005550137 systemd: Listening on initctl Compatibility Named Pipe.
Dec  8 04:08:42 np0005550137 systemd: Listening on udev Control Socket.
Dec  8 04:08:42 np0005550137 systemd: Listening on udev Kernel Socket.
Dec  8 04:08:42 np0005550137 systemd: Mounting Huge Pages File System...
Dec  8 04:08:42 np0005550137 systemd: Mounting POSIX Message Queue File System...
Dec  8 04:08:42 np0005550137 systemd: Mounting Kernel Debug File System...
Dec  8 04:08:42 np0005550137 systemd: Mounting Kernel Trace File System...
Dec  8 04:08:42 np0005550137 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  8 04:08:42 np0005550137 systemd: Starting Create List of Static Device Nodes...
Dec  8 04:08:42 np0005550137 systemd: Starting Load Kernel Module configfs...
Dec  8 04:08:42 np0005550137 systemd: Starting Load Kernel Module drm...
Dec  8 04:08:42 np0005550137 systemd: Starting Load Kernel Module efi_pstore...
Dec  8 04:08:42 np0005550137 systemd: Starting Load Kernel Module fuse...
Dec  8 04:08:42 np0005550137 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  8 04:08:42 np0005550137 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  8 04:08:42 np0005550137 systemd: Stopped File System Check on Root Device.
Dec  8 04:08:42 np0005550137 systemd: Stopped Journal Service.
Dec  8 04:08:42 np0005550137 systemd: Starting Journal Service...
Dec  8 04:08:42 np0005550137 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  8 04:08:42 np0005550137 systemd: Starting Generate network units from Kernel command line...
Dec  8 04:08:42 np0005550137 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  8 04:08:42 np0005550137 systemd: Starting Remount Root and Kernel File Systems...
Dec  8 04:08:42 np0005550137 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  8 04:08:42 np0005550137 systemd: Starting Apply Kernel Variables...
Dec  8 04:08:42 np0005550137 kernel: ACPI: bus type drm_connector registered
Dec  8 04:08:42 np0005550137 systemd: Starting Coldplug All udev Devices...
Dec  8 04:08:42 np0005550137 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  8 04:08:42 np0005550137 systemd-journald[680]: Journal started
Dec  8 04:08:42 np0005550137 systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  8 04:08:42 np0005550137 systemd[1]: Queued start job for default target Multi-User System.
Dec  8 04:08:42 np0005550137 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  8 04:08:42 np0005550137 systemd: Started Journal Service.
Dec  8 04:08:42 np0005550137 systemd[1]: Mounted Huge Pages File System.
Dec  8 04:08:42 np0005550137 systemd[1]: Mounted POSIX Message Queue File System.
Dec  8 04:08:42 np0005550137 systemd[1]: Mounted Kernel Debug File System.
Dec  8 04:08:42 np0005550137 systemd[1]: Mounted Kernel Trace File System.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Create List of Static Device Nodes.
Dec  8 04:08:42 np0005550137 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Load Kernel Module configfs.
Dec  8 04:08:42 np0005550137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Generate network units from Kernel command line.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  8 04:08:42 np0005550137 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  8 04:08:42 np0005550137 systemd[1]: Starting Rebuild Hardware Database...
Dec  8 04:08:42 np0005550137 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  8 04:08:42 np0005550137 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  8 04:08:42 np0005550137 systemd[1]: Starting Load/Save OS Random Seed...
Dec  8 04:08:42 np0005550137 systemd-journald[680]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  8 04:08:42 np0005550137 systemd-journald[680]: Received client request to flush runtime journal.
Dec  8 04:08:42 np0005550137 systemd[1]: Starting Create System Users...
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Coldplug All udev Devices.
Dec  8 04:08:42 np0005550137 kernel: fuse: init (API version 7.37)
Dec  8 04:08:42 np0005550137 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Load Kernel Module drm.
Dec  8 04:08:42 np0005550137 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Load Kernel Module fuse.
Dec  8 04:08:42 np0005550137 systemd[1]: Mounting FUSE Control File System...
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Load/Save OS Random Seed.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Apply Kernel Variables.
Dec  8 04:08:42 np0005550137 systemd[1]: Mounted FUSE Control File System.
Dec  8 04:08:42 np0005550137 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Create System Users.
Dec  8 04:08:42 np0005550137 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  8 04:08:42 np0005550137 systemd[1]: Reached target Preparation for Local File Systems.
Dec  8 04:08:42 np0005550137 systemd[1]: Reached target Local File Systems.
Dec  8 04:08:42 np0005550137 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  8 04:08:42 np0005550137 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  8 04:08:42 np0005550137 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  8 04:08:42 np0005550137 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  8 04:08:42 np0005550137 systemd[1]: Starting Automatic Boot Loader Update...
Dec  8 04:08:42 np0005550137 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  8 04:08:42 np0005550137 systemd[1]: Starting Create Volatile Files and Directories...
Dec  8 04:08:42 np0005550137 bootctl[697]: Couldn't find EFI system partition, skipping.
Dec  8 04:08:42 np0005550137 systemd[1]: Finished Automatic Boot Loader Update.
Dec  8 04:08:43 np0005550137 systemd[1]: Finished Create Volatile Files and Directories.
Dec  8 04:08:43 np0005550137 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  8 04:08:43 np0005550137 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  8 04:08:43 np0005550137 systemd[1]: Starting Security Auditing Service...
Dec  8 04:08:43 np0005550137 systemd[1]: Starting RPC Bind...
Dec  8 04:08:43 np0005550137 systemd[1]: Starting Rebuild Journal Catalog...
Dec  8 04:08:43 np0005550137 systemd[1]: Started RPC Bind.
Dec  8 04:08:43 np0005550137 augenrules[708]: /sbin/augenrules: No change
Dec  8 04:08:43 np0005550137 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  8 04:08:43 np0005550137 systemd[1]: Finished Rebuild Journal Catalog.
Dec  8 04:08:43 np0005550137 augenrules[723]: No rules
Dec  8 04:08:43 np0005550137 augenrules[723]: enabled 1
Dec  8 04:08:43 np0005550137 augenrules[723]: failure 1
Dec  8 04:08:43 np0005550137 augenrules[723]: pid 701
Dec  8 04:08:43 np0005550137 augenrules[723]: rate_limit 0
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_limit 8192
Dec  8 04:08:43 np0005550137 augenrules[723]: lost 0
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog 0
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_wait_time 60000
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_wait_time_actual 0
Dec  8 04:08:43 np0005550137 augenrules[723]: enabled 1
Dec  8 04:08:43 np0005550137 augenrules[723]: failure 1
Dec  8 04:08:43 np0005550137 augenrules[723]: pid 701
Dec  8 04:08:43 np0005550137 augenrules[723]: rate_limit 0
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_limit 8192
Dec  8 04:08:43 np0005550137 augenrules[723]: lost 0
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog 3
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_wait_time 60000
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_wait_time_actual 0
Dec  8 04:08:43 np0005550137 augenrules[723]: enabled 1
Dec  8 04:08:43 np0005550137 augenrules[723]: failure 1
Dec  8 04:08:43 np0005550137 augenrules[723]: pid 701
Dec  8 04:08:43 np0005550137 augenrules[723]: rate_limit 0
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_limit 8192
Dec  8 04:08:43 np0005550137 augenrules[723]: lost 0
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog 1
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_wait_time 60000
Dec  8 04:08:43 np0005550137 augenrules[723]: backlog_wait_time_actual 0
Dec  8 04:08:43 np0005550137 systemd[1]: Started Security Auditing Service.
Dec  8 04:08:43 np0005550137 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  8 04:08:43 np0005550137 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  8 04:08:43 np0005550137 systemd[1]: Finished Rebuild Hardware Database.
Dec  8 04:08:43 np0005550137 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  8 04:08:43 np0005550137 systemd[1]: Starting Update is Completed...
Dec  8 04:08:43 np0005550137 systemd[1]: Finished Update is Completed.
Dec  8 04:08:43 np0005550137 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Dec  8 04:08:44 np0005550137 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  8 04:08:44 np0005550137 systemd[1]: Reached target System Initialization.
Dec  8 04:08:44 np0005550137 systemd[1]: Started dnf makecache --timer.
Dec  8 04:08:44 np0005550137 systemd[1]: Started Daily rotation of log files.
Dec  8 04:08:44 np0005550137 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  8 04:08:44 np0005550137 systemd[1]: Reached target Timer Units.
Dec  8 04:08:44 np0005550137 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  8 04:08:44 np0005550137 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  8 04:08:44 np0005550137 systemd[1]: Reached target Socket Units.
Dec  8 04:08:44 np0005550137 systemd[1]: Starting D-Bus System Message Bus...
Dec  8 04:08:44 np0005550137 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  8 04:08:44 np0005550137 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  8 04:08:44 np0005550137 systemd[1]: Starting Load Kernel Module configfs...
Dec  8 04:08:44 np0005550137 systemd-udevd[757]: Network interface NamePolicy= disabled on kernel command line.
Dec  8 04:08:44 np0005550137 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  8 04:08:44 np0005550137 systemd[1]: Finished Load Kernel Module configfs.
Dec  8 04:08:44 np0005550137 systemd[1]: Started D-Bus System Message Bus.
Dec  8 04:08:44 np0005550137 systemd[1]: Reached target Basic System.
Dec  8 04:08:44 np0005550137 dbus-broker-lau[754]: Ready
Dec  8 04:08:44 np0005550137 systemd[1]: Starting NTP client/server...
Dec  8 04:08:44 np0005550137 chronyd[784]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  8 04:08:44 np0005550137 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  8 04:08:44 np0005550137 chronyd[784]: Loaded 0 symmetric keys
Dec  8 04:08:44 np0005550137 chronyd[784]: Using right/UTC timezone to obtain leap second data
Dec  8 04:08:44 np0005550137 chronyd[784]: Loaded seccomp filter (level 2)
Dec  8 04:08:44 np0005550137 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  8 04:08:44 np0005550137 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  8 04:08:44 np0005550137 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  8 04:08:44 np0005550137 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  8 04:08:44 np0005550137 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  8 04:08:44 np0005550137 systemd[1]: Starting IPv4 firewall with iptables...
Dec  8 04:08:44 np0005550137 systemd[1]: Started irqbalance daemon.
Dec  8 04:08:44 np0005550137 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  8 04:08:44 np0005550137 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  8 04:08:44 np0005550137 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  8 04:08:44 np0005550137 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  8 04:08:44 np0005550137 systemd[1]: Reached target sshd-keygen.target.
Dec  8 04:08:44 np0005550137 kernel: kvm_amd: TSC scaling supported
Dec  8 04:08:44 np0005550137 kernel: kvm_amd: Nested Virtualization enabled
Dec  8 04:08:44 np0005550137 kernel: kvm_amd: Nested Paging enabled
Dec  8 04:08:44 np0005550137 kernel: kvm_amd: LBR virtualization supported
Dec  8 04:08:44 np0005550137 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  8 04:08:44 np0005550137 systemd[1]: Reached target User and Group Name Lookups.
Dec  8 04:08:44 np0005550137 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  8 04:08:44 np0005550137 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  8 04:08:44 np0005550137 kernel: Console: switching to colour dummy device 80x25
Dec  8 04:08:44 np0005550137 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  8 04:08:44 np0005550137 kernel: [drm] features: -context_init
Dec  8 04:08:44 np0005550137 kernel: [drm] number of scanouts: 1
Dec  8 04:08:44 np0005550137 kernel: [drm] number of cap sets: 0
Dec  8 04:08:44 np0005550137 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  8 04:08:44 np0005550137 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  8 04:08:44 np0005550137 kernel: Console: switching to colour frame buffer device 128x48
Dec  8 04:08:44 np0005550137 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  8 04:08:44 np0005550137 systemd[1]: Starting User Login Management...
Dec  8 04:08:44 np0005550137 systemd[1]: Started NTP client/server.
Dec  8 04:08:44 np0005550137 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  8 04:08:44 np0005550137 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  8 04:08:44 np0005550137 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  8 04:08:44 np0005550137 systemd-logind[805]: New seat seat0.
Dec  8 04:08:44 np0005550137 systemd-logind[805]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  8 04:08:44 np0005550137 systemd-logind[805]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  8 04:08:44 np0005550137 systemd[1]: Started User Login Management.
Dec  8 04:08:44 np0005550137 iptables.init[792]: iptables: Applying firewall rules: [  OK  ]
Dec  8 04:08:44 np0005550137 systemd[1]: Finished IPv4 firewall with iptables.
Dec  8 04:08:45 np0005550137 cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Mon, 08 Dec 2025 09:08:45 +0000. Up 9.99 seconds.
Dec  8 04:08:46 np0005550137 systemd[1]: run-cloud\x2dinit-tmp-tmpyonuy1fd.mount: Deactivated successfully.
Dec  8 04:08:46 np0005550137 systemd[1]: Starting Hostname Service...
Dec  8 04:08:46 np0005550137 systemd[1]: Started Hostname Service.
Dec  8 04:08:46 np0005550137 systemd-hostnamed[855]: Hostname set to <np0005550137.novalocal> (static)
Dec  8 04:08:46 np0005550137 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  8 04:08:46 np0005550137 systemd[1]: Reached target Preparation for Network.
Dec  8 04:08:46 np0005550137 systemd[1]: Starting Network Manager...
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.5737] NetworkManager (version 1.54.1-1.el9) is starting... (boot:17566ae0-cd05-4218-b848-5d07916a84ed)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.5743] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.5822] manager[0x5634860df080]: monitoring kernel firmware directory '/lib/firmware'.
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.5862] hostname: hostname: using hostnamed
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.5862] hostname: static hostname changed from (none) to "np0005550137.novalocal"
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.5868] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.5965] manager[0x5634860df080]: rfkill: Wi-Fi hardware radio set enabled
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.5966] manager[0x5634860df080]: rfkill: WWAN hardware radio set enabled
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6007] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6008] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  8 04:08:46 np0005550137 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6009] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6011] manager: Networking is enabled by state file
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6016] settings: Loaded settings plugin: keyfile (internal)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6031] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6048] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6062] dhcp: init: Using DHCP client 'internal'
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6064] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6077] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6084] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6092] device (lo): Activation: starting connection 'lo' (ddff0d33-f7d1-42c0-97cc-0d2df594d095)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6101] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6106] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6132] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6136] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6139] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6141] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6144] device (eth0): carrier: link connected
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6148] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6153] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6158] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6161] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6162] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6164] manager: NetworkManager state is now CONNECTING
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6165] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6172] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6175] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6211] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6218] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6235] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:08:46 np0005550137 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  8 04:08:46 np0005550137 systemd[1]: Started Network Manager.
Dec  8 04:08:46 np0005550137 systemd[1]: Reached target Network.
Dec  8 04:08:46 np0005550137 systemd[1]: Starting Network Manager Wait Online...
Dec  8 04:08:46 np0005550137 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  8 04:08:46 np0005550137 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6459] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6461] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6474] device (lo): Activation: successful, device activated.
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6490] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6493] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6498] manager: NetworkManager state is now CONNECTED_SITE
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6502] device (eth0): Activation: successful, device activated.
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6506] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  8 04:08:46 np0005550137 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  8 04:08:46 np0005550137 NetworkManager[859]: <info>  [1765184926.6512] manager: startup complete
Dec  8 04:08:46 np0005550137 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  8 04:08:46 np0005550137 systemd[1]: Reached target NFS client services.
Dec  8 04:08:46 np0005550137 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  8 04:08:46 np0005550137 systemd[1]: Reached target Remote File Systems.
Dec  8 04:08:46 np0005550137 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  8 04:08:46 np0005550137 systemd[1]: Finished Network Manager Wait Online.
Dec  8 04:08:46 np0005550137 systemd[1]: Starting Cloud-init: Network Stage...
Dec  8 04:08:47 np0005550137 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Mon, 08 Dec 2025 09:08:47 +0000. Up 11.42 seconds.
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.176         | 255.255.255.0 | global | fa:16:3e:4d:92:4e |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe4d:924e/64 |       .       |  link  | fa:16:3e:4d:92:4e |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec  8 04:08:47 np0005550137 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  8 04:08:48 np0005550137 cloud-init[922]: Generating public/private rsa key pair.
Dec  8 04:08:48 np0005550137 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  8 04:08:48 np0005550137 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  8 04:08:48 np0005550137 cloud-init[922]: The key fingerprint is:
Dec  8 04:08:48 np0005550137 cloud-init[922]: SHA256:MScpx/axlyzTzfvGa69ZBIuKFJbNXZNzrrRTuP0N96U root@np0005550137.novalocal
Dec  8 04:08:48 np0005550137 cloud-init[922]: The key's randomart image is:
Dec  8 04:08:48 np0005550137 cloud-init[922]: +---[RSA 3072]----+
Dec  8 04:08:48 np0005550137 cloud-init[922]: |              o. |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |       . = . .o..|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |      . % = . .= |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |       = B = =ooo|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |        S = *.+*.|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |       . . =  *o+|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |        . .   .**|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |              E=*|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |              ++=|
Dec  8 04:08:48 np0005550137 cloud-init[922]: +----[SHA256]-----+
Dec  8 04:08:48 np0005550137 cloud-init[922]: Generating public/private ecdsa key pair.
Dec  8 04:08:48 np0005550137 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  8 04:08:48 np0005550137 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  8 04:08:48 np0005550137 cloud-init[922]: The key fingerprint is:
Dec  8 04:08:48 np0005550137 cloud-init[922]: SHA256:SxTPJgsW2iyFqwkBiLMnSD6A9HRqukd87q4WvAab/TU root@np0005550137.novalocal
Dec  8 04:08:48 np0005550137 cloud-init[922]: The key's randomart image is:
Dec  8 04:08:48 np0005550137 cloud-init[922]: +---[ECDSA 256]---+
Dec  8 04:08:48 np0005550137 cloud-init[922]: |B. . oo .        |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |B.o += . +       |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |=+ +o.= o +      |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |=o= .o o +       |
Dec  8 04:08:48 np0005550137 cloud-init[922]: | =o= .  S        |
Dec  8 04:08:48 np0005550137 cloud-init[922]: | .=oo  . .       |
Dec  8 04:08:48 np0005550137 cloud-init[922]: | .=.o. E.        |
Dec  8 04:08:48 np0005550137 cloud-init[922]: | o.=. . .        |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |  o.++           |
Dec  8 04:08:48 np0005550137 cloud-init[922]: +----[SHA256]-----+
Dec  8 04:08:48 np0005550137 cloud-init[922]: Generating public/private ed25519 key pair.
Dec  8 04:08:48 np0005550137 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  8 04:08:48 np0005550137 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  8 04:08:48 np0005550137 cloud-init[922]: The key fingerprint is:
Dec  8 04:08:48 np0005550137 cloud-init[922]: SHA256:leaKD1WWO62Whd+yzoMaTv1xisgjXnX3yKN1eU22EUw root@np0005550137.novalocal
Dec  8 04:08:48 np0005550137 cloud-init[922]: The key's randomart image is:
Dec  8 04:08:48 np0005550137 cloud-init[922]: +--[ED25519 256]--+
Dec  8 04:08:48 np0005550137 cloud-init[922]: |               E |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |           o  o  |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |          B    o |
Dec  8 04:08:48 np0005550137 cloud-init[922]: |         * +    .|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |        S * + ..o|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |       o + B + ==|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |      o = =.+ Bo=|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |      .B.+.+.O o.|
Dec  8 04:08:48 np0005550137 cloud-init[922]: |     ...*o..B.   |
Dec  8 04:08:48 np0005550137 cloud-init[922]: +----[SHA256]-----+
Dec  8 04:08:48 np0005550137 systemd[1]: Finished Cloud-init: Network Stage.
Dec  8 04:08:48 np0005550137 systemd[1]: Reached target Cloud-config availability.
Dec  8 04:08:48 np0005550137 systemd[1]: Reached target Network is Online.
Dec  8 04:08:48 np0005550137 systemd[1]: Starting Cloud-init: Config Stage...
Dec  8 04:08:48 np0005550137 systemd[1]: Starting Crash recovery kernel arming...
Dec  8 04:08:48 np0005550137 systemd[1]: Starting Notify NFS peers of a restart...
Dec  8 04:08:48 np0005550137 systemd[1]: Starting System Logging Service...
Dec  8 04:08:48 np0005550137 systemd[1]: Starting OpenSSH server daemon...
Dec  8 04:08:48 np0005550137 sm-notify[1005]: Version 2.5.4 starting
Dec  8 04:08:48 np0005550137 systemd[1]: Starting Permit User Sessions...
Dec  8 04:08:48 np0005550137 systemd[1]: Started Notify NFS peers of a restart.
Dec  8 04:08:48 np0005550137 systemd[1]: Started OpenSSH server daemon.
Dec  8 04:08:48 np0005550137 systemd[1]: Finished Permit User Sessions.
Dec  8 04:08:48 np0005550137 systemd[1]: Started Command Scheduler.
Dec  8 04:08:48 np0005550137 systemd[1]: Started Getty on tty1.
Dec  8 04:08:48 np0005550137 systemd[1]: Started Serial Getty on ttyS0.
Dec  8 04:08:48 np0005550137 systemd[1]: Reached target Login Prompts.
Dec  8 04:08:48 np0005550137 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec  8 04:08:48 np0005550137 systemd[1]: Started System Logging Service.
Dec  8 04:08:48 np0005550137 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  8 04:08:48 np0005550137 systemd[1]: Reached target Multi-User System.
Dec  8 04:08:48 np0005550137 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  8 04:08:48 np0005550137 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  8 04:08:48 np0005550137 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  8 04:08:48 np0005550137 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  8 04:08:48 np0005550137 kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Dec  8 04:08:48 np0005550137 kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-645.el9.x86_64kdump.img
Dec  8 04:08:48 np0005550137 cloud-init[1138]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Mon, 08 Dec 2025 09:08:48 +0000. Up 13.10 seconds.
Dec  8 04:08:48 np0005550137 systemd[1]: Finished Cloud-init: Config Stage.
Dec  8 04:08:48 np0005550137 systemd[1]: Starting Cloud-init: Final Stage...
Dec  8 04:08:49 np0005550137 dracut[1266]: dracut-057-102.git20250818.el9
Dec  8 04:08:49 np0005550137 cloud-init[1284]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Mon, 08 Dec 2025 09:08:49 +0000. Up 13.56 seconds.
Dec  8 04:08:49 np0005550137 dracut[1268]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-645.el9.x86_64kdump.img 5.14.0-645.el9.x86_64
Dec  8 04:08:49 np0005550137 cloud-init[1314]: #############################################################
Dec  8 04:08:49 np0005550137 cloud-init[1315]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  8 04:08:49 np0005550137 cloud-init[1324]: 256 SHA256:SxTPJgsW2iyFqwkBiLMnSD6A9HRqukd87q4WvAab/TU root@np0005550137.novalocal (ECDSA)
Dec  8 04:08:49 np0005550137 cloud-init[1334]: 256 SHA256:leaKD1WWO62Whd+yzoMaTv1xisgjXnX3yKN1eU22EUw root@np0005550137.novalocal (ED25519)
Dec  8 04:08:49 np0005550137 cloud-init[1342]: 3072 SHA256:MScpx/axlyzTzfvGa69ZBIuKFJbNXZNzrrRTuP0N96U root@np0005550137.novalocal (RSA)
Dec  8 04:08:49 np0005550137 cloud-init[1346]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  8 04:08:49 np0005550137 cloud-init[1348]: #############################################################
Dec  8 04:08:49 np0005550137 cloud-init[1284]: Cloud-init v. 24.4-7.el9 finished at Mon, 08 Dec 2025 09:08:49 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 13.72 seconds
Dec  8 04:08:49 np0005550137 systemd[1]: Finished Cloud-init: Final Stage.
Dec  8 04:08:49 np0005550137 systemd[1]: Reached target Cloud-init target.
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  8 04:08:49 np0005550137 dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: memstrack is not available
Dec  8 04:08:50 np0005550137 dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  8 04:08:50 np0005550137 dracut[1268]: memstrack is not available
Dec  8 04:08:50 np0005550137 dracut[1268]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  8 04:08:50 np0005550137 dracut[1268]: *** Including module: systemd ***
Dec  8 04:08:51 np0005550137 dracut[1268]: *** Including module: fips ***
Dec  8 04:08:51 np0005550137 chronyd[784]: Selected source 198.181.199.82 (2.centos.pool.ntp.org)
Dec  8 04:08:51 np0005550137 chronyd[784]: System clock TAI offset set to 37 seconds
Dec  8 04:08:51 np0005550137 dracut[1268]: *** Including module: systemd-initrd ***
Dec  8 04:08:51 np0005550137 dracut[1268]: *** Including module: i18n ***
Dec  8 04:08:51 np0005550137 dracut[1268]: *** Including module: drm ***
Dec  8 04:08:52 np0005550137 dracut[1268]: *** Including module: prefixdevname ***
Dec  8 04:08:52 np0005550137 dracut[1268]: *** Including module: kernel-modules ***
Dec  8 04:08:52 np0005550137 kernel: block vda: the capability attribute has been deprecated.
Dec  8 04:08:52 np0005550137 dracut[1268]: *** Including module: kernel-modules-extra ***
Dec  8 04:08:52 np0005550137 dracut[1268]: *** Including module: qemu ***
Dec  8 04:08:52 np0005550137 dracut[1268]: *** Including module: fstab-sys ***
Dec  8 04:08:52 np0005550137 dracut[1268]: *** Including module: rootfs-block ***
Dec  8 04:08:52 np0005550137 dracut[1268]: *** Including module: terminfo ***
Dec  8 04:08:52 np0005550137 dracut[1268]: *** Including module: udev-rules ***
Dec  8 04:08:52 np0005550137 chronyd[784]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Dec  8 04:08:53 np0005550137 dracut[1268]: Skipping udev rule: 91-permissions.rules
Dec  8 04:08:53 np0005550137 dracut[1268]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  8 04:08:53 np0005550137 dracut[1268]: *** Including module: virtiofs ***
Dec  8 04:08:53 np0005550137 dracut[1268]: *** Including module: dracut-systemd ***
Dec  8 04:08:53 np0005550137 dracut[1268]: *** Including module: usrmount ***
Dec  8 04:08:53 np0005550137 dracut[1268]: *** Including module: base ***
Dec  8 04:08:53 np0005550137 dracut[1268]: *** Including module: fs-lib ***
Dec  8 04:08:53 np0005550137 dracut[1268]: *** Including module: kdumpbase ***
Dec  8 04:08:54 np0005550137 dracut[1268]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  8 04:08:54 np0005550137 dracut[1268]:  microcode_ctl module: mangling fw_dir
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  8 04:08:54 np0005550137 irqbalance[797]: Cannot change IRQ 25 affinity: Operation not permitted
Dec  8 04:08:54 np0005550137 irqbalance[797]: IRQ 25 affinity is now unmanaged
Dec  8 04:08:54 np0005550137 irqbalance[797]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  8 04:08:54 np0005550137 irqbalance[797]: IRQ 31 affinity is now unmanaged
Dec  8 04:08:54 np0005550137 irqbalance[797]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  8 04:08:54 np0005550137 irqbalance[797]: IRQ 28 affinity is now unmanaged
Dec  8 04:08:54 np0005550137 irqbalance[797]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  8 04:08:54 np0005550137 irqbalance[797]: IRQ 32 affinity is now unmanaged
Dec  8 04:08:54 np0005550137 irqbalance[797]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  8 04:08:54 np0005550137 irqbalance[797]: IRQ 30 affinity is now unmanaged
Dec  8 04:08:54 np0005550137 irqbalance[797]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  8 04:08:54 np0005550137 irqbalance[797]: IRQ 29 affinity is now unmanaged
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  8 04:08:54 np0005550137 dracut[1268]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  8 04:08:54 np0005550137 dracut[1268]: *** Including module: openssl ***
Dec  8 04:08:54 np0005550137 dracut[1268]: *** Including module: shutdown ***
Dec  8 04:08:54 np0005550137 dracut[1268]: *** Including module: squash ***
Dec  8 04:08:54 np0005550137 dracut[1268]: *** Including modules done ***
Dec  8 04:08:54 np0005550137 dracut[1268]: *** Installing kernel module dependencies ***
Dec  8 04:08:55 np0005550137 dracut[1268]: *** Installing kernel module dependencies done ***
Dec  8 04:08:55 np0005550137 dracut[1268]: *** Resolving executable dependencies ***
Dec  8 04:08:56 np0005550137 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  8 04:08:57 np0005550137 dracut[1268]: *** Resolving executable dependencies done ***
Dec  8 04:08:57 np0005550137 dracut[1268]: *** Generating early-microcode cpio image ***
Dec  8 04:08:57 np0005550137 dracut[1268]: *** Store current command line parameters ***
Dec  8 04:08:57 np0005550137 dracut[1268]: Stored kernel commandline:
Dec  8 04:08:57 np0005550137 dracut[1268]: No dracut internal kernel commandline stored in the initramfs
Dec  8 04:08:57 np0005550137 dracut[1268]: *** Install squash loader ***
Dec  8 04:08:58 np0005550137 dracut[1268]: *** Squashing the files inside the initramfs ***
Dec  8 04:08:59 np0005550137 dracut[1268]: *** Squashing the files inside the initramfs done ***
Dec  8 04:08:59 np0005550137 dracut[1268]: *** Creating image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' ***
Dec  8 04:08:59 np0005550137 dracut[1268]: *** Hardlinking files ***
Dec  8 04:08:59 np0005550137 dracut[1268]: *** Hardlinking files done ***
Dec  8 04:08:59 np0005550137 dracut[1268]: *** Creating initramfs image file '/boot/initramfs-5.14.0-645.el9.x86_64kdump.img' done ***
Dec  8 04:09:00 np0005550137 kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Dec  8 04:09:00 np0005550137 kdumpctl[1018]: kdump: Starting kdump: [OK]
Dec  8 04:09:00 np0005550137 systemd[1]: Finished Crash recovery kernel arming.
Dec  8 04:09:00 np0005550137 systemd[1]: Startup finished in 3.197s (kernel) + 2.793s (initrd) + 18.679s (userspace) = 24.670s.
Dec  8 04:09:08 np0005550137 systemd[1]: Created slice User Slice of UID 1000.
Dec  8 04:09:08 np0005550137 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  8 04:09:08 np0005550137 systemd-logind[805]: New session 1 of user zuul.
Dec  8 04:09:08 np0005550137 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  8 04:09:08 np0005550137 systemd[1]: Starting User Manager for UID 1000...
Dec  8 04:09:08 np0005550137 systemd[4299]: Queued start job for default target Main User Target.
Dec  8 04:09:08 np0005550137 systemd[4299]: Created slice User Application Slice.
Dec  8 04:09:08 np0005550137 systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  8 04:09:08 np0005550137 systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Dec  8 04:09:08 np0005550137 systemd[4299]: Reached target Paths.
Dec  8 04:09:08 np0005550137 systemd[4299]: Reached target Timers.
Dec  8 04:09:08 np0005550137 systemd[4299]: Starting D-Bus User Message Bus Socket...
Dec  8 04:09:08 np0005550137 systemd[4299]: Starting Create User's Volatile Files and Directories...
Dec  8 04:09:08 np0005550137 systemd[4299]: Finished Create User's Volatile Files and Directories.
Dec  8 04:09:08 np0005550137 systemd[4299]: Listening on D-Bus User Message Bus Socket.
Dec  8 04:09:08 np0005550137 systemd[4299]: Reached target Sockets.
Dec  8 04:09:08 np0005550137 systemd[4299]: Reached target Basic System.
Dec  8 04:09:08 np0005550137 systemd[4299]: Reached target Main User Target.
Dec  8 04:09:08 np0005550137 systemd[4299]: Startup finished in 112ms.
Dec  8 04:09:08 np0005550137 systemd[1]: Started User Manager for UID 1000.
Dec  8 04:09:08 np0005550137 systemd[1]: Started Session 1 of User zuul.
Dec  8 04:09:09 np0005550137 python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:09:11 np0005550137 python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:09:16 np0005550137 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  8 04:09:19 np0005550137 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:09:20 np0005550137 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  8 04:09:22 np0005550137 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNPPDJvGKcIuqeJPP+WbKWWQSzP2AJ/vKgCj78QYpHv/amAovd2vQw3w1iZfWCor/0upP4zWNZmAlvksskv7wb7ZPLhbmqsqKWGaUIk2sv48oO/cncA/qO3rs6C6o1AaGyy1wUBS9ESyom2uAc2Ai3NDfrqxfhcEcMQ56KX43PEQnvA+Z47OmYHmZqSUiJrIrCkMHU5yrc/8xSh1heDBsXdoQkPewf0iuTPY56Y7kvzEkmg4aa89jVT/sZhQSFg97A60CkTUGiDqMew2uCxpbmTRUYUKfe/C9afwqtykmzzUCa6svhRsZyzh7hDPzGFVfeTbkp5ieh01Z94nIuaYnwLVIw2VaOa2Eka34Mkc/OaVPHFmSu42kEU5hJWA3IBkkuyJPMZRHN/8m8C8uTXGiRGlPCOyz6FzyRPav1ypdhQuIvCUoM7fts0ySlGsNXUIOIoDLqSbHvSDpyaxIABMTf9J9OB66RWaa+35TRmHgzmfCZuheI7KyTjaCQuoONpxE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:22 np0005550137 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:23 np0005550137 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:09:23 np0005550137 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765184962.7813122-251-91925721672203/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=1df8dede247c48adb2424ce13fe8a4bf_id_rsa follow=False checksum=18291d8501757c280404496843d3fff4bb4fa318 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:24 np0005550137 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:09:24 np0005550137 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765184963.7541304-306-104916056942399/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=1df8dede247c48adb2424ce13fe8a4bf_id_rsa.pub follow=False checksum=ec2d2d9aa3ea7e171027eb81d26b909d9e883caa backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:25 np0005550137 python3[4971]: ansible-ping Invoked with data=pong
Dec  8 04:09:26 np0005550137 python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:09:31 np0005550137 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  8 04:09:32 np0005550137 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:32 np0005550137 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:32 np0005550137 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:33 np0005550137 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:33 np0005550137 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:33 np0005550137 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:35 np0005550137 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:35 np0005550137 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:09:36 np0005550137 python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765184975.555731-31-156869637307239/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:37 np0005550137 python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:37 np0005550137 python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:37 np0005550137 python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:37 np0005550137 python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:38 np0005550137 python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:38 np0005550137 python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:38 np0005550137 python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:39 np0005550137 python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:39 np0005550137 python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:39 np0005550137 python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:39 np0005550137 python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:40 np0005550137 python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:40 np0005550137 python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:40 np0005550137 python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:40 np0005550137 python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:41 np0005550137 python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:41 np0005550137 python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:41 np0005550137 python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:42 np0005550137 python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:42 np0005550137 python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:42 np0005550137 python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:42 np0005550137 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:43 np0005550137 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:43 np0005550137 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:43 np0005550137 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:44 np0005550137 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:09:46 np0005550137 python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  8 04:09:47 np0005550137 systemd[1]: Starting Time & Date Service...
Dec  8 04:09:47 np0005550137 systemd[1]: Started Time & Date Service.
Dec  8 04:09:47 np0005550137 systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Dec  8 04:09:47 np0005550137 python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:47 np0005550137 python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:09:48 np0005550137 python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765184987.6559658-251-264605672216010/source _original_basename=tmphlyjdmn9 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:48 np0005550137 python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:09:49 np0005550137 python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765184988.5151603-301-185263630644434/source _original_basename=tmp2_4l5cbx follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:50 np0005550137 python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:09:50 np0005550137 python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765184989.994374-381-187131886456729/source _original_basename=tmpoa8bjt4b follow=False checksum=8e0e434468aa50922357fbdb56d8b197f48f0949 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:51 np0005550137 python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:09:51 np0005550137 python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:09:51 np0005550137 python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:09:52 np0005550137 python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765184991.6593788-451-22384674875626/source _original_basename=tmpcfv6jdgb follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:53 np0005550137 python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-cef2-4a2f-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:09:53 np0005550137 python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-cef2-4a2f-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  8 04:09:55 np0005550137 python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:09:57 np0005550137 chronyd[784]: Selected source 198.181.199.82 (2.centos.pool.ntp.org)
Dec  8 04:10:04 np0005550137 irqbalance[797]: Cannot change IRQ 26 affinity: Operation not permitted
Dec  8 04:10:04 np0005550137 irqbalance[797]: IRQ 26 affinity is now unmanaged
Dec  8 04:10:13 np0005550137 python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:10:17 np0005550137 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  8 04:10:59 np0005550137 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  8 04:10:59 np0005550137 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3550] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  8 04:10:59 np0005550137 systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3734] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3766] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3771] device (eth1): carrier: link connected
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3774] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3781] policy: auto-activating connection 'Wired connection 1' (fe561d8b-32b6-34ed-83aa-8b6f6081cb76)
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3784] device (eth1): Activation: starting connection 'Wired connection 1' (fe561d8b-32b6-34ed-83aa-8b6f6081cb76)
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3786] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3790] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3795] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:10:59 np0005550137 NetworkManager[859]: <info>  [1765185059.3800] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:11:00 np0005550137 python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-4210-dbc9-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:11:10 np0005550137 python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:11:10 np0005550137 python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765185069.8783472-104-63379579832819/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d2cd323a43fb88624e1a5ac958951f3cbb6d561d backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:11:11 np0005550137 python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  8 04:11:11 np0005550137 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  8 04:11:11 np0005550137 systemd[1]: Stopped Network Manager Wait Online.
Dec  8 04:11:11 np0005550137 systemd[1]: Stopping Network Manager Wait Online...
Dec  8 04:11:11 np0005550137 systemd[1]: Stopping Network Manager...
Dec  8 04:11:11 np0005550137 NetworkManager[859]: <info>  [1765185071.3384] caught SIGTERM, shutting down normally.
Dec  8 04:11:11 np0005550137 NetworkManager[859]: <info>  [1765185071.3397] dhcp4 (eth0): canceled DHCP transaction
Dec  8 04:11:11 np0005550137 NetworkManager[859]: <info>  [1765185071.3397] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:11:11 np0005550137 NetworkManager[859]: <info>  [1765185071.3397] dhcp4 (eth0): state changed no lease
Dec  8 04:11:11 np0005550137 NetworkManager[859]: <info>  [1765185071.3400] manager: NetworkManager state is now CONNECTING
Dec  8 04:11:11 np0005550137 NetworkManager[859]: <info>  [1765185071.3487] dhcp4 (eth1): canceled DHCP transaction
Dec  8 04:11:11 np0005550137 NetworkManager[859]: <info>  [1765185071.3487] dhcp4 (eth1): state changed no lease
Dec  8 04:11:11 np0005550137 NetworkManager[859]: <info>  [1765185071.3542] exiting (success)
Dec  8 04:11:11 np0005550137 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  8 04:11:11 np0005550137 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  8 04:11:11 np0005550137 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  8 04:11:11 np0005550137 systemd[1]: Stopped Network Manager.
Dec  8 04:11:11 np0005550137 systemd[1]: Starting Network Manager...
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.4077] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:17566ae0-cd05-4218-b848-5d07916a84ed)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.4079] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.4139] manager[0x55e2b3d6d070]: monitoring kernel firmware directory '/lib/firmware'.
Dec  8 04:11:11 np0005550137 systemd[1]: Starting Hostname Service...
Dec  8 04:11:11 np0005550137 systemd[1]: Started Hostname Service.
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5166] hostname: hostname: using hostnamed
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5168] hostname: static hostname changed from (none) to "np0005550137.novalocal"
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5175] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5182] manager[0x55e2b3d6d070]: rfkill: Wi-Fi hardware radio set enabled
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5183] manager[0x55e2b3d6d070]: rfkill: WWAN hardware radio set enabled
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5212] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5213] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5214] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5215] manager: Networking is enabled by state file
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5219] settings: Loaded settings plugin: keyfile (internal)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5223] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5250] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5263] dhcp: init: Using DHCP client 'internal'
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5266] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5272] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5278] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5286] device (lo): Activation: starting connection 'lo' (ddff0d33-f7d1-42c0-97cc-0d2df594d095)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5291] device (eth0): carrier: link connected
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5295] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5299] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5301] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5307] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5312] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5317] device (eth1): carrier: link connected
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5322] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5326] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (fe561d8b-32b6-34ed-83aa-8b6f6081cb76) (indicated)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5327] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5331] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5337] device (eth1): Activation: starting connection 'Wired connection 1' (fe561d8b-32b6-34ed-83aa-8b6f6081cb76)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5343] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  8 04:11:11 np0005550137 systemd[1]: Started Network Manager.
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5347] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5349] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5351] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5353] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5355] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5358] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5360] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5363] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5368] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5370] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5377] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5380] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5392] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5397] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5401] device (lo): Activation: successful, device activated.
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5410] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5414] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  8 04:11:11 np0005550137 systemd[1]: Starting Network Manager Wait Online...
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5485] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5526] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5529] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5533] manager: NetworkManager state is now CONNECTED_SITE
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5537] device (eth0): Activation: successful, device activated.
Dec  8 04:11:11 np0005550137 NetworkManager[7182]: <info>  [1765185071.5542] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  8 04:11:11 np0005550137 python3[7257]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-4210-dbc9-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:11:21 np0005550137 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  8 04:11:23 np0005550137 systemd[4299]: Starting Mark boot as successful...
Dec  8 04:11:23 np0005550137 systemd[4299]: Finished Mark boot as successful.
Dec  8 04:11:41 np0005550137 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7246] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  8 04:11:56 np0005550137 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  8 04:11:56 np0005550137 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7571] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7574] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7583] device (eth1): Activation: successful, device activated.
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7588] manager: startup complete
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7590] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <warn>  [1765185116.7593] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7599] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  8 04:11:56 np0005550137 systemd[1]: Finished Network Manager Wait Online.
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7697] dhcp4 (eth1): canceled DHCP transaction
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7699] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7699] dhcp4 (eth1): state changed no lease
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7717] policy: auto-activating connection 'ci-private-network' (ab271149-d1e0-5f20-aeea-443d463a255c)
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7721] device (eth1): Activation: starting connection 'ci-private-network' (ab271149-d1e0-5f20-aeea-443d463a255c)
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7722] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7724] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7732] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7741] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7791] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7794] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:11:56 np0005550137 NetworkManager[7182]: <info>  [1765185116.7802] device (eth1): Activation: successful, device activated.
Dec  8 04:12:06 np0005550137 chronyd[784]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Dec  8 04:12:06 np0005550137 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  8 04:12:11 np0005550137 systemd-logind[805]: Session 1 logged out. Waiting for processes to exit.
Dec  8 04:13:06 np0005550137 systemd-logind[805]: New session 3 of user zuul.
Dec  8 04:13:06 np0005550137 systemd[1]: Started Session 3 of User zuul.
Dec  8 04:13:07 np0005550137 python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:13:07 np0005550137 python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765185186.7369-373-148262730635308/source _original_basename=tmp0e8h8km0 follow=False checksum=8271a5ec204270f8e7a020ec8c20e039e9f6795b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:13:11 np0005550137 systemd[1]: session-3.scope: Deactivated successfully.
Dec  8 04:13:11 np0005550137 systemd-logind[805]: Session 3 logged out. Waiting for processes to exit.
Dec  8 04:13:11 np0005550137 systemd-logind[805]: Removed session 3.
Dec  8 04:14:23 np0005550137 systemd[4299]: Created slice User Background Tasks Slice.
Dec  8 04:14:23 np0005550137 systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Dec  8 04:14:23 np0005550137 systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Dec  8 04:18:33 np0005550137 systemd-logind[805]: New session 4 of user zuul.
Dec  8 04:18:33 np0005550137 systemd[1]: Started Session 4 of User zuul.
Dec  8 04:18:33 np0005550137 python3[7504]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-7257-7d30-000000001cd4-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:18:33 np0005550137 python3[7533]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:18:34 np0005550137 python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:18:34 np0005550137 python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:18:34 np0005550137 python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:18:35 np0005550137 python3[7637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:18:35 np0005550137 python3[7715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:18:36 np0005550137 python3[7788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765185515.6736107-516-104625530689214/source _original_basename=tmp6iv4hhrm follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:18:37 np0005550137 python3[7838]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  8 04:18:37 np0005550137 systemd[1]: Reloading.
Dec  8 04:18:37 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:18:39 np0005550137 python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  8 04:18:39 np0005550137 python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:18:39 np0005550137 python3[7950]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:18:40 np0005550137 python3[7978]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:18:40 np0005550137 python3[8006]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:18:41 np0005550137 python3[8033]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-7257-7d30-000000001cdb-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:18:41 np0005550137 python3[8063]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  8 04:18:44 np0005550137 systemd[1]: session-4.scope: Deactivated successfully.
Dec  8 04:18:44 np0005550137 systemd[1]: session-4.scope: Consumed 4.325s CPU time.
Dec  8 04:18:44 np0005550137 systemd-logind[805]: Session 4 logged out. Waiting for processes to exit.
Dec  8 04:18:44 np0005550137 systemd-logind[805]: Removed session 4.
Dec  8 04:18:46 np0005550137 systemd-logind[805]: New session 5 of user zuul.
Dec  8 04:18:46 np0005550137 systemd[1]: Started Session 5 of User zuul.
Dec  8 04:18:46 np0005550137 python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  8 04:19:01 np0005550137 kernel: SELinux:  Converting 385 SID table entries...
Dec  8 04:19:01 np0005550137 kernel: SELinux:  policy capability network_peer_controls=1
Dec  8 04:19:01 np0005550137 kernel: SELinux:  policy capability open_perms=1
Dec  8 04:19:01 np0005550137 kernel: SELinux:  policy capability extended_socket_class=1
Dec  8 04:19:01 np0005550137 kernel: SELinux:  policy capability always_check_network=0
Dec  8 04:19:01 np0005550137 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  8 04:19:01 np0005550137 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  8 04:19:01 np0005550137 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  8 04:19:11 np0005550137 kernel: SELinux:  Converting 385 SID table entries...
Dec  8 04:19:11 np0005550137 kernel: SELinux:  policy capability network_peer_controls=1
Dec  8 04:19:11 np0005550137 kernel: SELinux:  policy capability open_perms=1
Dec  8 04:19:11 np0005550137 kernel: SELinux:  policy capability extended_socket_class=1
Dec  8 04:19:11 np0005550137 kernel: SELinux:  policy capability always_check_network=0
Dec  8 04:19:11 np0005550137 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  8 04:19:11 np0005550137 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  8 04:19:11 np0005550137 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  8 04:19:20 np0005550137 kernel: SELinux:  Converting 385 SID table entries...
Dec  8 04:19:20 np0005550137 kernel: SELinux:  policy capability network_peer_controls=1
Dec  8 04:19:20 np0005550137 kernel: SELinux:  policy capability open_perms=1
Dec  8 04:19:20 np0005550137 kernel: SELinux:  policy capability extended_socket_class=1
Dec  8 04:19:20 np0005550137 kernel: SELinux:  policy capability always_check_network=0
Dec  8 04:19:20 np0005550137 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  8 04:19:20 np0005550137 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  8 04:19:20 np0005550137 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  8 04:19:21 np0005550137 setsebool[8163]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  8 04:19:21 np0005550137 setsebool[8163]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  8 04:19:33 np0005550137 kernel: SELinux:  Converting 388 SID table entries...
Dec  8 04:19:33 np0005550137 kernel: SELinux:  policy capability network_peer_controls=1
Dec  8 04:19:33 np0005550137 kernel: SELinux:  policy capability open_perms=1
Dec  8 04:19:33 np0005550137 kernel: SELinux:  policy capability extended_socket_class=1
Dec  8 04:19:33 np0005550137 kernel: SELinux:  policy capability always_check_network=0
Dec  8 04:19:33 np0005550137 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  8 04:19:33 np0005550137 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  8 04:19:33 np0005550137 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  8 04:19:52 np0005550137 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  8 04:19:52 np0005550137 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  8 04:19:52 np0005550137 systemd[1]: Starting man-db-cache-update.service...
Dec  8 04:19:52 np0005550137 systemd[1]: Reloading.
Dec  8 04:19:52 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:19:53 np0005550137 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  8 04:20:04 np0005550137 python3[16328]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-b914-8a75-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:20:05 np0005550137 kernel: evm: overlay not supported
Dec  8 04:20:05 np0005550137 systemd[4299]: Starting D-Bus User Message Bus...
Dec  8 04:20:05 np0005550137 dbus-broker-launch[16826]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  8 04:20:05 np0005550137 dbus-broker-launch[16826]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  8 04:20:05 np0005550137 systemd[4299]: Started D-Bus User Message Bus.
Dec  8 04:20:05 np0005550137 dbus-broker-lau[16826]: Ready
Dec  8 04:20:05 np0005550137 systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  8 04:20:05 np0005550137 systemd[4299]: Created slice Slice /user.
Dec  8 04:20:05 np0005550137 systemd[4299]: podman-16759.scope: unit configures an IP firewall, but not running as root.
Dec  8 04:20:05 np0005550137 systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Dec  8 04:20:05 np0005550137 systemd[4299]: Started podman-16759.scope.
Dec  8 04:20:05 np0005550137 systemd[4299]: Started podman-pause-9c13ac16.scope.
Dec  8 04:20:06 np0005550137 python3[17387]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.113:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.113:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:20:06 np0005550137 python3[17387]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec  8 04:20:06 np0005550137 systemd[1]: session-5.scope: Deactivated successfully.
Dec  8 04:20:06 np0005550137 systemd[1]: session-5.scope: Consumed 1min 4.505s CPU time.
Dec  8 04:20:06 np0005550137 systemd-logind[805]: Session 5 logged out. Waiting for processes to exit.
Dec  8 04:20:06 np0005550137 systemd-logind[805]: Removed session 5.
Dec  8 04:20:24 np0005550137 irqbalance[797]: Cannot change IRQ 27 affinity: Operation not permitted
Dec  8 04:20:24 np0005550137 irqbalance[797]: IRQ 27 affinity is now unmanaged
Dec  8 04:20:32 np0005550137 systemd-logind[805]: New session 6 of user zuul.
Dec  8 04:20:32 np0005550137 systemd[1]: Started Session 6 of User zuul.
Dec  8 04:20:32 np0005550137 python3[28833]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNodgDEEVbs9D+eDo6354ceaXxTqfvK3Z/cF5DrtyS1CjWtL7DbY6RVV+akTh6jQVHA4k5uRzHYDQ4i2DnhKCz8= zuul@np0005550136.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:20:33 np0005550137 python3[29055]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNodgDEEVbs9D+eDo6354ceaXxTqfvK3Z/cF5DrtyS1CjWtL7DbY6RVV+akTh6jQVHA4k5uRzHYDQ4i2DnhKCz8= zuul@np0005550136.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:20:34 np0005550137 python3[29536]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005550137.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  8 04:20:34 np0005550137 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  8 04:20:34 np0005550137 systemd[1]: Finished man-db-cache-update.service.
Dec  8 04:20:34 np0005550137 systemd[1]: man-db-cache-update.service: Consumed 50.387s CPU time.
Dec  8 04:20:34 np0005550137 systemd[1]: run-ra75d2d07bb8040a08195f08b7a3c4145.service: Deactivated successfully.
Dec  8 04:20:34 np0005550137 python3[29754]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNodgDEEVbs9D+eDo6354ceaXxTqfvK3Z/cF5DrtyS1CjWtL7DbY6RVV+akTh6jQVHA4k5uRzHYDQ4i2DnhKCz8= zuul@np0005550136.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  8 04:20:34 np0005550137 python3[29833]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:20:35 np0005550137 python3[29906]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765185634.6679733-167-5822354480582/source _original_basename=tmpby7ep_ms follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:20:36 np0005550137 python3[29956]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  8 04:20:36 np0005550137 systemd[1]: Starting Hostname Service...
Dec  8 04:20:36 np0005550137 systemd[1]: Started Hostname Service.
Dec  8 04:20:36 np0005550137 systemd-hostnamed[29960]: Changed pretty hostname to 'compute-0'
Dec  8 04:20:36 np0005550137 systemd-hostnamed[29960]: Hostname set to <compute-0> (static)
Dec  8 04:20:36 np0005550137 NetworkManager[7182]: <info>  [1765185636.4919] hostname: static hostname changed from "np0005550137.novalocal" to "compute-0"
Dec  8 04:20:36 np0005550137 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  8 04:20:36 np0005550137 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  8 04:20:37 np0005550137 systemd[1]: session-6.scope: Deactivated successfully.
Dec  8 04:20:37 np0005550137 systemd[1]: session-6.scope: Consumed 2.273s CPU time.
Dec  8 04:20:37 np0005550137 systemd-logind[805]: Session 6 logged out. Waiting for processes to exit.
Dec  8 04:20:37 np0005550137 systemd-logind[805]: Removed session 6.
Dec  8 04:20:46 np0005550137 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  8 04:21:06 np0005550137 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  8 04:24:16 np0005550137 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  8 04:24:16 np0005550137 systemd-logind[805]: New session 7 of user zuul.
Dec  8 04:24:16 np0005550137 systemd[1]: Started Session 7 of User zuul.
Dec  8 04:24:16 np0005550137 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  8 04:24:16 np0005550137 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  8 04:24:16 np0005550137 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  8 04:24:16 np0005550137 python3[30059]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:24:18 np0005550137 python3[30175]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:24:19 np0005550137 python3[30248]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2241616-33979-274696465849257/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:24:19 np0005550137 python3[30274]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:24:19 np0005550137 python3[30347]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2241616-33979-274696465849257/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:24:19 np0005550137 python3[30373]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:24:20 np0005550137 python3[30446]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2241616-33979-274696465849257/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:24:20 np0005550137 python3[30472]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:24:20 np0005550137 python3[30545]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2241616-33979-274696465849257/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:24:21 np0005550137 python3[30571]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:24:21 np0005550137 python3[30644]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2241616-33979-274696465849257/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:24:21 np0005550137 python3[30670]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:24:21 np0005550137 python3[30743]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2241616-33979-274696465849257/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:24:22 np0005550137 python3[30769]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:24:22 np0005550137 python3[30842]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765185858.2241616-33979-274696465849257/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:24:34 np0005550137 python3[30900]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:29:34 np0005550137 systemd[1]: session-7.scope: Deactivated successfully.
Dec  8 04:29:34 np0005550137 systemd[1]: session-7.scope: Consumed 4.926s CPU time.
Dec  8 04:29:34 np0005550137 systemd-logind[805]: Session 7 logged out. Waiting for processes to exit.
Dec  8 04:29:34 np0005550137 systemd-logind[805]: Removed session 7.
Dec  8 04:36:10 np0005550137 systemd-logind[805]: New session 8 of user zuul.
Dec  8 04:36:10 np0005550137 systemd[1]: Started Session 8 of User zuul.
Dec  8 04:36:12 np0005550137 python3.9[31086]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:36:13 np0005550137 python3.9[31267]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:36:21 np0005550137 systemd[1]: session-8.scope: Deactivated successfully.
Dec  8 04:36:21 np0005550137 systemd[1]: session-8.scope: Consumed 8.488s CPU time.
Dec  8 04:36:21 np0005550137 systemd-logind[805]: Session 8 logged out. Waiting for processes to exit.
Dec  8 04:36:21 np0005550137 systemd-logind[805]: Removed session 8.
Dec  8 04:36:37 np0005550137 systemd-logind[805]: New session 9 of user zuul.
Dec  8 04:36:37 np0005550137 systemd[1]: Started Session 9 of User zuul.
Dec  8 04:36:38 np0005550137 python3.9[31483]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  8 04:36:39 np0005550137 python3.9[31657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:36:40 np0005550137 python3.9[31809]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:36:41 np0005550137 python3.9[31962]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:36:42 np0005550137 python3.9[32114]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:36:43 np0005550137 python3.9[32266]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:36:44 np0005550137 python3.9[32389]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186602.8605733-177-241859153816432/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:36:44 np0005550137 python3.9[32541]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:36:45 np0005550137 python3.9[32697]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:36:46 np0005550137 python3.9[32849]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:36:47 np0005550137 python3.9[33001]: ansible-ansible.builtin.service_facts Invoked
Dec  8 04:36:53 np0005550137 python3.9[33256]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:36:54 np0005550137 python3.9[33406]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:36:55 np0005550137 python3.9[33560]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:36:56 np0005550137 python3.9[33718]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  8 04:36:57 np0005550137 python3.9[33802]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:37:43 np0005550137 systemd[1]: Reloading.
Dec  8 04:37:43 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:37:43 np0005550137 systemd[1]: Starting dnf makecache...
Dec  8 04:37:43 np0005550137 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  8 04:37:43 np0005550137 dnf[34017]: Failed determining last makecache time.
Dec  8 04:37:43 np0005550137 systemd[1]: Reloading.
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-barbican-42b4c41831408a8e323 140 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 170 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-cinder-1c00d6490d88e436f26ef 157 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-python-stevedore-c4acc5639fd2329372142 128 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-python-cloudkitty-tests-tempest-2c80f8 160 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-os-refresh-config-9bfc52b5049be2d8de61 158 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 165 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-python-designate-tests-tempest-347fdbc 169 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-glance-1fd12c29b339f30fe823e 123 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 156 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-manila-3c01b7181572c95dac462 173 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-python-whitebox-neutron-tests-tempest- 164 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-octavia-ba397f07a7331190208c 168 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 systemd[1]: Reloading.
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-watcher-c014f81a8647287f6dcc 154 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-ansible-config_template-5ccaa22121a7ff 155 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 137 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-swift-dc98a8463506ac520c469a 145 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-python-tempestconf-8515371b7cceebd4282 146 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 dnf[34017]: delorean-openstack-heat-ui-013accbfd179753bc3f0 189 kB/s | 3.0 kB     00:00
Dec  8 04:37:43 np0005550137 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  8 04:37:44 np0005550137 dnf[34017]: CentOS Stream 9 - BaseOS                         70 kB/s | 7.3 kB     00:00
Dec  8 04:37:44 np0005550137 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  8 04:37:44 np0005550137 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  8 04:37:44 np0005550137 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  8 04:37:44 np0005550137 dnf[34017]: CentOS Stream 9 - AppStream                      33 kB/s | 7.4 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: CentOS Stream 9 - CRB                            72 kB/s | 7.2 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: CentOS Stream 9 - Extras packages                63 kB/s | 8.3 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: dlrn-antelope-testing                            99 kB/s | 3.0 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: dlrn-antelope-build-deps                        145 kB/s | 3.0 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: centos9-rabbitmq                                122 kB/s | 3.0 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: centos9-storage                                 138 kB/s | 3.0 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: centos9-opstools                                137 kB/s | 3.0 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: NFV SIG OpenvSwitch                             140 kB/s | 3.0 kB     00:00
Dec  8 04:37:44 np0005550137 dnf[34017]: repo-setup-centos-appstream                     200 kB/s | 4.4 kB     00:00
Dec  8 04:37:45 np0005550137 dnf[34017]: repo-setup-centos-baseos                        162 kB/s | 3.9 kB     00:00
Dec  8 04:37:45 np0005550137 dnf[34017]: repo-setup-centos-highavailability              164 kB/s | 3.9 kB     00:00
Dec  8 04:37:45 np0005550137 dnf[34017]: repo-setup-centos-powertools                    178 kB/s | 4.3 kB     00:00
Dec  8 04:37:45 np0005550137 dnf[34017]: Extra Packages for Enterprise Linux 9 - x86_64  222 kB/s |  34 kB     00:00
Dec  8 04:37:46 np0005550137 dnf[34017]: Extra Packages for Enterprise Linux 9 - x86_64   25 MB/s |  20 MB     00:00
Dec  8 04:37:57 np0005550137 dnf[34017]: Metadata cache created.
Dec  8 04:37:57 np0005550137 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  8 04:37:57 np0005550137 systemd[1]: Finished dnf makecache.
Dec  8 04:37:57 np0005550137 systemd[1]: dnf-makecache.service: Consumed 12.570s CPU time.
Dec  8 04:38:48 np0005550137 kernel: SELinux:  Converting 2718 SID table entries...
Dec  8 04:38:48 np0005550137 kernel: SELinux:  policy capability network_peer_controls=1
Dec  8 04:38:48 np0005550137 kernel: SELinux:  policy capability open_perms=1
Dec  8 04:38:48 np0005550137 kernel: SELinux:  policy capability extended_socket_class=1
Dec  8 04:38:48 np0005550137 kernel: SELinux:  policy capability always_check_network=0
Dec  8 04:38:48 np0005550137 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  8 04:38:48 np0005550137 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  8 04:38:48 np0005550137 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  8 04:38:48 np0005550137 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  8 04:38:48 np0005550137 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  8 04:38:48 np0005550137 systemd[1]: Starting man-db-cache-update.service...
Dec  8 04:38:48 np0005550137 systemd[1]: Reloading.
Dec  8 04:38:48 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:38:48 np0005550137 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  8 04:38:49 np0005550137 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  8 04:38:49 np0005550137 systemd[1]: Finished man-db-cache-update.service.
Dec  8 04:38:49 np0005550137 systemd[1]: man-db-cache-update.service: Consumed 1.233s CPU time.
Dec  8 04:38:49 np0005550137 systemd[1]: run-r869d2fddb1ff4da196c8ef860a3daa26.service: Deactivated successfully.
Dec  8 04:38:54 np0005550137 python3.9[35401]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:38:56 np0005550137 python3.9[35682]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  8 04:38:57 np0005550137 python3.9[35834]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  8 04:39:04 np0005550137 python3.9[35987]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:39:05 np0005550137 python3.9[36139]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec  8 04:39:09 np0005550137 python3.9[36293]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:39:10 np0005550137 python3.9[36445]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:39:10 np0005550137 python3.9[36568]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186749.6275783-666-202684881412173/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=747873c1ecad1b42bf7284bb8d89d0dfb93dcb85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:39:12 np0005550137 python3.9[36720]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:39:13 np0005550137 python3.9[36874]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:39:14 np0005550137 python3.9[37027]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:39:15 np0005550137 python3.9[37179]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  8 04:39:15 np0005550137 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  8 04:39:16 np0005550137 python3.9[37333]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  8 04:39:17 np0005550137 python3.9[37491]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  8 04:39:18 np0005550137 python3.9[37651]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  8 04:39:18 np0005550137 python3.9[37804]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  8 04:39:19 np0005550137 python3.9[37962]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  8 04:39:20 np0005550137 python3.9[38114]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:39:23 np0005550137 python3.9[38267]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:39:23 np0005550137 python3.9[38419]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:39:24 np0005550137 python3.9[38542]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186763.4443173-1023-273004439974330/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:39:25 np0005550137 python3.9[38694]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  8 04:39:25 np0005550137 systemd[1]: Starting Load Kernel Modules...
Dec  8 04:39:25 np0005550137 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  8 04:39:25 np0005550137 kernel: Bridge firewalling registered
Dec  8 04:39:25 np0005550137 systemd-modules-load[38698]: Inserted module 'br_netfilter'
Dec  8 04:39:25 np0005550137 systemd[1]: Finished Load Kernel Modules.
Dec  8 04:39:26 np0005550137 python3.9[38855]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:39:27 np0005550137 python3.9[38978]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186766.0197241-1092-14627537614517/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:39:28 np0005550137 python3.9[39130]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:39:31 np0005550137 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  8 04:39:31 np0005550137 dbus-broker-launch[754]: Noticed file-system modification, trigger reload.
Dec  8 04:39:32 np0005550137 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  8 04:39:32 np0005550137 systemd[1]: Starting man-db-cache-update.service...
Dec  8 04:39:32 np0005550137 systemd[1]: Reloading.
Dec  8 04:39:32 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:39:32 np0005550137 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  8 04:39:34 np0005550137 python3.9[41220]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:39:35 np0005550137 python3.9[42288]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  8 04:39:35 np0005550137 python3.9[42990]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:39:35 np0005550137 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  8 04:39:35 np0005550137 systemd[1]: Finished man-db-cache-update.service.
Dec  8 04:39:35 np0005550137 systemd[1]: man-db-cache-update.service: Consumed 4.602s CPU time.
Dec  8 04:39:35 np0005550137 systemd[1]: run-r18a8ef6e05a14d7aa133fc30e72f7396.service: Deactivated successfully.
Dec  8 04:39:36 np0005550137 python3.9[43291]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:39:36 np0005550137 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  8 04:39:37 np0005550137 systemd[1]: Starting Authorization Manager...
Dec  8 04:39:37 np0005550137 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  8 04:39:37 np0005550137 polkitd[43510]: Started polkitd version 0.117
Dec  8 04:39:37 np0005550137 systemd[1]: Started Authorization Manager.
Dec  8 04:39:38 np0005550137 python3.9[43680]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:39:38 np0005550137 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  8 04:39:38 np0005550137 systemd[1]: tuned.service: Deactivated successfully.
Dec  8 04:39:38 np0005550137 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  8 04:39:38 np0005550137 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  8 04:39:38 np0005550137 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  8 04:39:39 np0005550137 python3.9[43842]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  8 04:39:43 np0005550137 python3.9[43994]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:39:43 np0005550137 systemd[1]: Reloading.
Dec  8 04:39:43 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:39:44 np0005550137 python3.9[44183]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:39:44 np0005550137 systemd[1]: Reloading.
Dec  8 04:39:44 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:39:47 np0005550137 python3.9[44373]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:39:48 np0005550137 python3.9[44526]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:39:48 np0005550137 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  8 04:39:49 np0005550137 python3.9[44679]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:39:51 np0005550137 python3.9[44841]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:39:52 np0005550137 python3.9[44994]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  8 04:39:52 np0005550137 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  8 04:39:52 np0005550137 systemd[1]: Stopped Apply Kernel Variables.
Dec  8 04:39:52 np0005550137 systemd[1]: Stopping Apply Kernel Variables...
Dec  8 04:39:52 np0005550137 systemd[1]: Starting Apply Kernel Variables...
Dec  8 04:39:52 np0005550137 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  8 04:39:52 np0005550137 systemd[1]: Finished Apply Kernel Variables.
Dec  8 04:39:52 np0005550137 systemd[1]: session-9.scope: Deactivated successfully.
Dec  8 04:39:52 np0005550137 systemd[1]: session-9.scope: Consumed 2min 21.487s CPU time.
Dec  8 04:39:52 np0005550137 systemd-logind[805]: Session 9 logged out. Waiting for processes to exit.
Dec  8 04:39:52 np0005550137 systemd-logind[805]: Removed session 9.
Dec  8 04:39:58 np0005550137 systemd-logind[805]: New session 10 of user zuul.
Dec  8 04:39:58 np0005550137 systemd[1]: Started Session 10 of User zuul.
Dec  8 04:39:59 np0005550137 python3.9[45179]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:40:00 np0005550137 python3.9[45335]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  8 04:40:01 np0005550137 python3.9[45488]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  8 04:40:02 np0005550137 python3.9[45646]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec  8 04:40:04 np0005550137 python3.9[45806]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  8 04:40:05 np0005550137 python3.9[45890]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  8 04:40:08 np0005550137 python3.9[46053]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:40:21 np0005550137 kernel: SELinux:  Converting 2730 SID table entries...
Dec  8 04:40:21 np0005550137 kernel: SELinux:  policy capability network_peer_controls=1
Dec  8 04:40:21 np0005550137 kernel: SELinux:  policy capability open_perms=1
Dec  8 04:40:21 np0005550137 kernel: SELinux:  policy capability extended_socket_class=1
Dec  8 04:40:21 np0005550137 kernel: SELinux:  policy capability always_check_network=0
Dec  8 04:40:21 np0005550137 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  8 04:40:21 np0005550137 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  8 04:40:21 np0005550137 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  8 04:40:21 np0005550137 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  8 04:40:21 np0005550137 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  8 04:40:22 np0005550137 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  8 04:40:22 np0005550137 systemd[1]: Starting man-db-cache-update.service...
Dec  8 04:40:22 np0005550137 systemd[1]: Reloading.
Dec  8 04:40:22 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:40:22 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:40:22 np0005550137 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  8 04:40:23 np0005550137 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  8 04:40:23 np0005550137 systemd[1]: Finished man-db-cache-update.service.
Dec  8 04:40:23 np0005550137 systemd[1]: run-r0ab72e7b632240ca92760c8aa7a75883.service: Deactivated successfully.
Dec  8 04:40:24 np0005550137 python3.9[47154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  8 04:40:24 np0005550137 systemd[1]: Reloading.
Dec  8 04:40:24 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:40:24 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:40:24 np0005550137 systemd[1]: Starting Open vSwitch Database Unit...
Dec  8 04:40:24 np0005550137 chown[47198]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  8 04:40:24 np0005550137 ovs-ctl[47203]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  8 04:40:24 np0005550137 ovs-ctl[47203]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  8 04:40:24 np0005550137 ovs-ctl[47203]: Starting ovsdb-server [  OK  ]
Dec  8 04:40:24 np0005550137 ovs-vsctl[47253]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  8 04:40:25 np0005550137 ovs-vsctl[47273]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"6f33d14f-a221-4c23-87a0-ea255c11696f\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  8 04:40:25 np0005550137 ovs-ctl[47203]: Configuring Open vSwitch system IDs [  OK  ]
Dec  8 04:40:25 np0005550137 ovs-ctl[47203]: Enabling remote OVSDB managers [  OK  ]
Dec  8 04:40:25 np0005550137 ovs-vsctl[47279]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  8 04:40:25 np0005550137 systemd[1]: Started Open vSwitch Database Unit.
Dec  8 04:40:25 np0005550137 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  8 04:40:25 np0005550137 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  8 04:40:25 np0005550137 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  8 04:40:25 np0005550137 kernel: openvswitch: Open vSwitch switching datapath
Dec  8 04:40:25 np0005550137 ovs-ctl[47324]: Inserting openvswitch module [  OK  ]
Dec  8 04:40:25 np0005550137 ovs-ctl[47293]: Starting ovs-vswitchd [  OK  ]
Dec  8 04:40:25 np0005550137 ovs-vsctl[47341]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  8 04:40:25 np0005550137 ovs-ctl[47293]: Enabling remote OVSDB managers [  OK  ]
Dec  8 04:40:25 np0005550137 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  8 04:40:25 np0005550137 systemd[1]: Starting Open vSwitch...
Dec  8 04:40:25 np0005550137 systemd[1]: Finished Open vSwitch.
Dec  8 04:40:26 np0005550137 python3.9[47495]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:40:27 np0005550137 python3.9[47647]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  8 04:40:28 np0005550137 kernel: SELinux:  Converting 2744 SID table entries...
Dec  8 04:40:28 np0005550137 kernel: SELinux:  policy capability network_peer_controls=1
Dec  8 04:40:28 np0005550137 kernel: SELinux:  policy capability open_perms=1
Dec  8 04:40:28 np0005550137 kernel: SELinux:  policy capability extended_socket_class=1
Dec  8 04:40:28 np0005550137 kernel: SELinux:  policy capability always_check_network=0
Dec  8 04:40:28 np0005550137 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  8 04:40:28 np0005550137 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  8 04:40:28 np0005550137 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  8 04:40:29 np0005550137 python3.9[47802]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:40:30 np0005550137 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  8 04:40:30 np0005550137 python3.9[47960]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:40:33 np0005550137 python3.9[48113]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:40:34 np0005550137 python3.9[48400]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  8 04:40:35 np0005550137 python3.9[48550]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:40:36 np0005550137 python3.9[48704]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:40:38 np0005550137 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  8 04:40:38 np0005550137 systemd[1]: Starting man-db-cache-update.service...
Dec  8 04:40:38 np0005550137 systemd[1]: Reloading.
Dec  8 04:40:38 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:40:38 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:40:38 np0005550137 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  8 04:40:38 np0005550137 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  8 04:40:38 np0005550137 systemd[1]: Finished man-db-cache-update.service.
Dec  8 04:40:38 np0005550137 systemd[1]: run-r9c0c584aa166474490c4eb2f15551999.service: Deactivated successfully.
Dec  8 04:40:39 np0005550137 python3.9[49023]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  8 04:40:39 np0005550137 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  8 04:40:39 np0005550137 systemd[1]: Stopped Network Manager Wait Online.
Dec  8 04:40:39 np0005550137 systemd[1]: Stopping Network Manager Wait Online...
Dec  8 04:40:39 np0005550137 systemd[1]: Stopping Network Manager...
Dec  8 04:40:39 np0005550137 NetworkManager[7182]: <info>  [1765186839.9159] caught SIGTERM, shutting down normally.
Dec  8 04:40:39 np0005550137 NetworkManager[7182]: <info>  [1765186839.9178] dhcp4 (eth0): canceled DHCP transaction
Dec  8 04:40:39 np0005550137 NetworkManager[7182]: <info>  [1765186839.9178] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:40:39 np0005550137 NetworkManager[7182]: <info>  [1765186839.9178] dhcp4 (eth0): state changed no lease
Dec  8 04:40:39 np0005550137 NetworkManager[7182]: <info>  [1765186839.9180] manager: NetworkManager state is now CONNECTED_SITE
Dec  8 04:40:39 np0005550137 NetworkManager[7182]: <info>  [1765186839.9244] exiting (success)
Dec  8 04:40:39 np0005550137 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  8 04:40:39 np0005550137 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  8 04:40:39 np0005550137 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  8 04:40:39 np0005550137 systemd[1]: Stopped Network Manager.
Dec  8 04:40:39 np0005550137 systemd[1]: NetworkManager.service: Consumed 13.777s CPU time, 4.3M memory peak, read 0B from disk, written 31.0K to disk.
Dec  8 04:40:39 np0005550137 systemd[1]: Starting Network Manager...
Dec  8 04:40:39 np0005550137 NetworkManager[49035]: <info>  [1765186839.9961] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:17566ae0-cd05-4218-b848-5d07916a84ed)
Dec  8 04:40:39 np0005550137 NetworkManager[49035]: <info>  [1765186839.9962] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0022] manager[0x5563fa5eb090]: monitoring kernel firmware directory '/lib/firmware'.
Dec  8 04:40:40 np0005550137 systemd[1]: Starting Hostname Service...
Dec  8 04:40:40 np0005550137 systemd[1]: Started Hostname Service.
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0880] hostname: hostname: using hostnamed
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0881] hostname: static hostname changed from (none) to "compute-0"
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0887] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0892] manager[0x5563fa5eb090]: rfkill: Wi-Fi hardware radio set enabled
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0893] manager[0x5563fa5eb090]: rfkill: WWAN hardware radio set enabled
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0912] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0920] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0921] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0921] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0921] manager: Networking is enabled by state file
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0923] settings: Loaded settings plugin: keyfile (internal)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0926] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0948] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0956] dhcp: init: Using DHCP client 'internal'
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0958] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0962] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0967] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0973] device (lo): Activation: starting connection 'lo' (ddff0d33-f7d1-42c0-97cc-0d2df594d095)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0979] device (eth0): carrier: link connected
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0982] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0985] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0986] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0992] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.0997] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1001] device (eth1): carrier: link connected
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1005] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1009] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (ab271149-d1e0-5f20-aeea-443d463a255c) (indicated)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1009] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1013] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1020] device (eth1): Activation: starting connection 'ci-private-network' (ab271149-d1e0-5f20-aeea-443d463a255c)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1025] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  8 04:40:40 np0005550137 systemd[1]: Started Network Manager.
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1033] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1034] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1036] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1038] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1041] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1042] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1045] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1049] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1057] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1059] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1067] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1079] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1088] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1089] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1095] device (lo): Activation: successful, device activated.
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1102] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1107] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1184] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1189] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1195] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1198] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1201] device (eth1): Activation: successful, device activated.
Dec  8 04:40:40 np0005550137 systemd[1]: Starting Network Manager Wait Online...
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1238] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1240] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1242] manager: NetworkManager state is now CONNECTED_SITE
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1245] device (eth0): Activation: successful, device activated.
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1250] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  8 04:40:40 np0005550137 NetworkManager[49035]: <info>  [1765186840.1271] manager: startup complete
Dec  8 04:40:40 np0005550137 systemd[1]: Finished Network Manager Wait Online.
Dec  8 04:40:41 np0005550137 python3.9[49249]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:40:45 np0005550137 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  8 04:40:46 np0005550137 systemd[1]: Starting man-db-cache-update.service...
Dec  8 04:40:46 np0005550137 systemd[1]: Reloading.
Dec  8 04:40:46 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:40:46 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:40:46 np0005550137 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  8 04:40:46 np0005550137 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  8 04:40:46 np0005550137 systemd[1]: Finished man-db-cache-update.service.
Dec  8 04:40:46 np0005550137 systemd[1]: run-r98cd6d0cbad54afc9c97024a885a3ee2.service: Deactivated successfully.
Dec  8 04:40:48 np0005550137 python3.9[49708]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:40:48 np0005550137 python3.9[49860]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:40:49 np0005550137 python3.9[50014]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:40:50 np0005550137 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  8 04:40:50 np0005550137 python3.9[50166]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:40:51 np0005550137 python3.9[50318]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:40:51 np0005550137 python3.9[50470]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:40:52 np0005550137 python3.9[50626]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:40:53 np0005550137 python3.9[50749]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186852.2284439-647-52876982905976/.source _original_basename=.zy6o8yk7 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:40:54 np0005550137 python3.9[50901]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:40:54 np0005550137 python3.9[51053]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  8 04:40:55 np0005550137 python3.9[51205]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:40:58 np0005550137 python3.9[51632]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  8 04:40:59 np0005550137 ansible-async_wrapper.py[51807]: Invoked with j417824183564 300 /home/zuul/.ansible/tmp/ansible-tmp-1765186858.4535775-845-192053812360842/AnsiballZ_edpm_os_net_config.py _
Dec  8 04:40:59 np0005550137 ansible-async_wrapper.py[51810]: Starting module and watcher
Dec  8 04:40:59 np0005550137 ansible-async_wrapper.py[51810]: Start watching 51811 (300)
Dec  8 04:40:59 np0005550137 ansible-async_wrapper.py[51811]: Start module (51811)
Dec  8 04:40:59 np0005550137 ansible-async_wrapper.py[51807]: Return async_wrapper task started.
Dec  8 04:40:59 np0005550137 python3.9[51812]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec  8 04:41:00 np0005550137 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  8 04:41:00 np0005550137 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  8 04:41:00 np0005550137 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  8 04:41:00 np0005550137 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  8 04:41:00 np0005550137 kernel: cfg80211: failed to load regulatory.db
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.3393] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.3407] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4012] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4013] audit: op="connection-add" uuid="b992c4a3-43a9-4034-b989-4fd3a63ef551" name="br-ex-br" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4030] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4031] audit: op="connection-add" uuid="c50fc8ed-4019-44a7-a93f-5906c555f41b" name="br-ex-port" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4046] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4047] audit: op="connection-add" uuid="34838639-67f2-4691-a87a-2168efbafbeb" name="eth1-port" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4065] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4066] audit: op="connection-add" uuid="1b73c901-a5e7-4b34-86d7-b3370c61dc4a" name="vlan20-port" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4078] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4080] audit: op="connection-add" uuid="75523864-9ad9-41d2-bbd7-6a26b380bad2" name="vlan21-port" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4091] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4093] audit: op="connection-add" uuid="62ac51d5-7500-4545-8b47-491f7b717be5" name="vlan22-port" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4105] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4107] audit: op="connection-add" uuid="8288a675-4016-45ce-b8c1-dc63d16c5ecb" name="vlan23-port" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4126] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority,connection.timestamp" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4142] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4143] audit: op="connection-add" uuid="f2ec1d5b-3c31-4507-b511-43c00680ad59" name="br-ex-if" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4193] audit: op="connection-update" uuid="ab271149-d1e0-5f20-aeea-443d463a255c" name="ci-private-network" args="ovs-interface.type,ipv4.routing-rules,ipv4.dns,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.routes,ipv6.routing-rules,ipv6.dns,ipv6.addresses,ipv6.addr-gen-mode,ipv6.method,ipv6.routes,connection.master,connection.port-type,connection.timestamp,connection.slave-type,connection.controller,ovs-external-ids.data" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4213] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4215] audit: op="connection-add" uuid="10dbaa3f-724f-4d2d-8fed-50edbbec14b9" name="vlan20-if" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4232] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4233] audit: op="connection-add" uuid="aefe9ab5-ffeb-4c44-9f2c-35111b6f86d4" name="vlan21-if" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4252] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4254] audit: op="connection-add" uuid="43879cdc-2e76-4a4f-860a-8a4a58eeb07c" name="vlan22-if" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4270] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4272] audit: op="connection-add" uuid="b3ed8639-5ae1-4a3f-9043-d91ee1bd2c31" name="vlan23-if" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4285] audit: op="connection-delete" uuid="fe561d8b-32b6-34ed-83aa-8b6f6081cb76" name="Wired connection 1" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4299] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4309] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4312] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b992c4a3-43a9-4034-b989-4fd3a63ef551)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4313] audit: op="connection-activate" uuid="b992c4a3-43a9-4034-b989-4fd3a63ef551" name="br-ex-br" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4314] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4319] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4323] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (c50fc8ed-4019-44a7-a93f-5906c555f41b)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4325] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4331] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4335] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (34838639-67f2-4691-a87a-2168efbafbeb)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4337] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4343] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4348] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (1b73c901-a5e7-4b34-86d7-b3370c61dc4a)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4349] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4354] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4358] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (75523864-9ad9-41d2-bbd7-6a26b380bad2)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4360] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4367] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4370] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (62ac51d5-7500-4545-8b47-491f7b717be5)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4372] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4377] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4380] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8288a675-4016-45ce-b8c1-dc63d16c5ecb)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4380] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4382] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4384] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4390] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4393] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4397] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (f2ec1d5b-3c31-4507-b511-43c00680ad59)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4398] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4401] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4402] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4403] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4404] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4413] device (eth1): disconnecting for new activation request.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4414] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4416] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4418] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4419] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4421] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4426] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4430] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (10dbaa3f-724f-4d2d-8fed-50edbbec14b9)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4430] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4433] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4435] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4435] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4438] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4441] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4444] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (aefe9ab5-ffeb-4c44-9f2c-35111b6f86d4)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4445] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4447] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4448] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4449] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4451] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4456] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4459] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (43879cdc-2e76-4a4f-860a-8a4a58eeb07c)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4460] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4462] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4463] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4464] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4466] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4470] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4473] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (b3ed8639-5ae1-4a3f-9043-d91ee1bd2c31)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4473] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4475] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4477] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4478] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4479] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4491] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,connection.autoconnect-priority" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4492] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4495] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4496] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4503] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4507] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4511] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4514] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4517] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4522] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 kernel: ovs-system: entered promiscuous mode
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4527] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4530] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4531] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4535] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4538] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4541] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4542] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4546] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4550] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4553] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 kernel: Timeout policy base is empty
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4555] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4559] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4562] dhcp4 (eth0): canceled DHCP transaction
Dec  8 04:41:01 np0005550137 systemd-udevd[51818]: Network interface NamePolicy= disabled on kernel command line.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4563] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4563] dhcp4 (eth0): state changed no lease
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4565] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  8 04:41:01 np0005550137 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4577] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4579] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51813 uid=0 result="fail" reason="Device is not activated"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4585] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4617] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4619] dhcp4 (eth0): state changed new lease, address=38.102.83.176
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4668] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4681] device (eth1): disconnecting for new activation request.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4682] audit: op="connection-activate" uuid="ab271149-d1e0-5f20-aeea-443d463a255c" name="ci-private-network" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4683] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4703] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4917] device (eth1): Activation: starting connection 'ci-private-network' (ab271149-d1e0-5f20-aeea-443d463a255c)
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4924] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4948] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4954] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4965] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4973] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4979] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51813 uid=0 result="success"
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.4980] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 kernel: br-ex: entered promiscuous mode
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5032] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5035] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5036] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5037] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5038] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5042] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5048] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5051] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5053] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5057] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5060] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5063] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5066] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5069] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5072] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5075] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5078] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5082] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5091] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5097] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 kernel: vlan22: entered promiscuous mode
Dec  8 04:41:01 np0005550137 systemd-udevd[51817]: Network interface NamePolicy= disabled on kernel command line.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5189] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5194] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5209] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5214] device (eth1): Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5231] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5264] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 kernel: vlan20: entered promiscuous mode
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5279] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5285] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5293] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5310] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5345] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5347] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 kernel: vlan21: entered promiscuous mode
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5368] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5394] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5412] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 kernel: vlan23: entered promiscuous mode
Dec  8 04:41:01 np0005550137 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5477] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5478] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5483] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5489] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5510] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5547] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5547] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5550] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5555] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5569] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5605] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5607] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  8 04:41:01 np0005550137 NetworkManager[49035]: <info>  [1765186861.5615] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  8 04:41:02 np0005550137 NetworkManager[49035]: <info>  [1765186862.6821] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51813 uid=0 result="success"
Dec  8 04:41:02 np0005550137 NetworkManager[49035]: <info>  [1765186862.8471] checkpoint[0x5563fa5c1950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  8 04:41:02 np0005550137 NetworkManager[49035]: <info>  [1765186862.8475] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51813 uid=0 result="success"
Dec  8 04:41:03 np0005550137 python3.9[52170]: ansible-ansible.legacy.async_status Invoked with jid=j417824183564.51807 mode=status _async_dir=/root/.ansible_async
Dec  8 04:41:03 np0005550137 NetworkManager[49035]: <info>  [1765186863.1687] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51813 uid=0 result="success"
Dec  8 04:41:03 np0005550137 NetworkManager[49035]: <info>  [1765186863.1699] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51813 uid=0 result="success"
Dec  8 04:41:03 np0005550137 NetworkManager[49035]: <info>  [1765186863.4284] audit: op="networking-control" arg="global-dns-configuration" pid=51813 uid=0 result="success"
Dec  8 04:41:03 np0005550137 NetworkManager[49035]: <info>  [1765186863.4313] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  8 04:41:03 np0005550137 NetworkManager[49035]: <info>  [1765186863.4343] audit: op="networking-control" arg="global-dns-configuration" pid=51813 uid=0 result="success"
Dec  8 04:41:03 np0005550137 NetworkManager[49035]: <info>  [1765186863.4366] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51813 uid=0 result="success"
Dec  8 04:41:03 np0005550137 NetworkManager[49035]: <info>  [1765186863.5973] checkpoint[0x5563fa5c1a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  8 04:41:03 np0005550137 NetworkManager[49035]: <info>  [1765186863.5976] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51813 uid=0 result="success"
Dec  8 04:41:03 np0005550137 ansible-async_wrapper.py[51811]: Module complete (51811)
Dec  8 04:41:04 np0005550137 ansible-async_wrapper.py[51810]: Done in kid B.
Dec  8 04:41:06 np0005550137 python3.9[52276]: ansible-ansible.legacy.async_status Invoked with jid=j417824183564.51807 mode=status _async_dir=/root/.ansible_async
Dec  8 04:41:06 np0005550137 python3.9[52376]: ansible-ansible.legacy.async_status Invoked with jid=j417824183564.51807 mode=cleanup _async_dir=/root/.ansible_async
Dec  8 04:41:07 np0005550137 python3.9[52528]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:41:08 np0005550137 python3.9[52651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186867.4101567-926-48675956951425/.source.returncode _original_basename=.51p87l51 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:41:09 np0005550137 python3.9[52803]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:41:09 np0005550137 python3.9[52926]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186868.820863-974-22725866394665/.source.cfg _original_basename=.patjural follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:41:10 np0005550137 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  8 04:41:10 np0005550137 python3.9[53082]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  8 04:41:10 np0005550137 systemd[1]: Reloading Network Manager...
Dec  8 04:41:10 np0005550137 NetworkManager[49035]: <info>  [1765186870.8024] audit: op="reload" arg="0" pid=53086 uid=0 result="success"
Dec  8 04:41:10 np0005550137 NetworkManager[49035]: <info>  [1765186870.8043] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  8 04:41:10 np0005550137 systemd[1]: Reloaded Network Manager.
Dec  8 04:41:11 np0005550137 systemd[1]: session-10.scope: Deactivated successfully.
Dec  8 04:41:11 np0005550137 systemd[1]: session-10.scope: Consumed 51.573s CPU time.
Dec  8 04:41:11 np0005550137 systemd-logind[805]: Session 10 logged out. Waiting for processes to exit.
Dec  8 04:41:11 np0005550137 systemd-logind[805]: Removed session 10.
Dec  8 04:41:16 np0005550137 systemd-logind[805]: New session 11 of user zuul.
Dec  8 04:41:16 np0005550137 systemd[1]: Started Session 11 of User zuul.
Dec  8 04:41:17 np0005550137 python3.9[53270]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:41:18 np0005550137 python3.9[53426]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  8 04:41:19 np0005550137 python3.9[53619]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:41:20 np0005550137 systemd[1]: session-11.scope: Deactivated successfully.
Dec  8 04:41:20 np0005550137 systemd[1]: session-11.scope: Consumed 2.339s CPU time.
Dec  8 04:41:20 np0005550137 systemd-logind[805]: Session 11 logged out. Waiting for processes to exit.
Dec  8 04:41:20 np0005550137 systemd-logind[805]: Removed session 11.
Dec  8 04:41:20 np0005550137 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  8 04:41:25 np0005550137 systemd-logind[805]: New session 12 of user zuul.
Dec  8 04:41:25 np0005550137 systemd[1]: Started Session 12 of User zuul.
Dec  8 04:41:26 np0005550137 python3.9[53804]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:41:27 np0005550137 python3.9[53958]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:41:28 np0005550137 python3.9[54114]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  8 04:41:29 np0005550137 python3.9[54199]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:41:31 np0005550137 python3.9[54354]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  8 04:41:33 np0005550137 python3.9[54550]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:41:34 np0005550137 python3.9[54702]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:41:34 np0005550137 podman[54703]: 2025-12-08 09:41:34.870675878 +0000 UTC m=+0.068035770 system refresh
Dec  8 04:41:35 np0005550137 python3.9[54865]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:41:35 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:41:36 np0005550137 python3.9[54988]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186895.1201682-197-198129907868729/.source.json follow=False _original_basename=podman_network_config.j2 checksum=219c806bd3cd96f4787ca9b43f1b94cbee109279 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:41:37 np0005550137 python3.9[55140]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:41:37 np0005550137 python3.9[55263]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186896.6676621-242-115945364348622/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c2a85b7389d30a5066b1ae0058c9a8ae1bc25688 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:41:38 np0005550137 python3.9[55415]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:41:39 np0005550137 python3.9[55567]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:41:40 np0005550137 python3.9[55719]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:41:40 np0005550137 python3.9[55871]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:41:41 np0005550137 python3.9[56023]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:41:44 np0005550137 python3.9[56176]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:41:44 np0005550137 python3.9[56330]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:41:45 np0005550137 python3.9[56484]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:41:46 np0005550137 python3.9[56636]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:41:47 np0005550137 python3.9[56789]: ansible-service_facts Invoked
Dec  8 04:41:47 np0005550137 network[56806]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  8 04:41:47 np0005550137 network[56807]: 'network-scripts' will be removed from distribution in near future.
Dec  8 04:41:47 np0005550137 network[56808]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  8 04:41:53 np0005550137 python3.9[57263]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  8 04:41:56 np0005550137 python3.9[57416]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  8 04:41:57 np0005550137 python3.9[57570]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:41:58 np0005550137 python3.9[57695]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186917.0753078-674-118986380672679/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:41:59 np0005550137 python3.9[57849]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:41:59 np0005550137 python3.9[57974]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186918.4490705-719-153720520103599/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:01 np0005550137 python3.9[58128]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:03 np0005550137 python3.9[58282]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  8 04:42:04 np0005550137 python3.9[58366]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:42:05 np0005550137 python3.9[58520]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  8 04:42:06 np0005550137 python3.9[58604]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  8 04:42:06 np0005550137 systemd[1]: Stopping NTP client/server...
Dec  8 04:42:06 np0005550137 chronyd[784]: chronyd exiting
Dec  8 04:42:06 np0005550137 systemd[1]: chronyd.service: Deactivated successfully.
Dec  8 04:42:06 np0005550137 systemd[1]: Stopped NTP client/server.
Dec  8 04:42:06 np0005550137 systemd[1]: Starting NTP client/server...
Dec  8 04:42:06 np0005550137 chronyd[58614]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  8 04:42:06 np0005550137 chronyd[58614]: Frequency -31.859 +/- 0.338 ppm read from /var/lib/chrony/drift
Dec  8 04:42:06 np0005550137 chronyd[58614]: Loaded seccomp filter (level 2)
Dec  8 04:42:06 np0005550137 systemd[1]: Started NTP client/server.
Dec  8 04:42:07 np0005550137 systemd[1]: session-12.scope: Deactivated successfully.
Dec  8 04:42:07 np0005550137 systemd[1]: session-12.scope: Consumed 25.870s CPU time.
Dec  8 04:42:07 np0005550137 systemd-logind[805]: Session 12 logged out. Waiting for processes to exit.
Dec  8 04:42:07 np0005550137 systemd-logind[805]: Removed session 12.
Dec  8 04:42:12 np0005550137 systemd-logind[805]: New session 13 of user zuul.
Dec  8 04:42:12 np0005550137 systemd[1]: Started Session 13 of User zuul.
Dec  8 04:42:13 np0005550137 python3.9[58795]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:14 np0005550137 python3.9[58947]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:15 np0005550137 python3.9[59070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186933.8281605-62-178266426059343/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:15 np0005550137 systemd[1]: session-13.scope: Deactivated successfully.
Dec  8 04:42:15 np0005550137 systemd[1]: session-13.scope: Consumed 1.659s CPU time.
Dec  8 04:42:15 np0005550137 systemd-logind[805]: Session 13 logged out. Waiting for processes to exit.
Dec  8 04:42:15 np0005550137 systemd-logind[805]: Removed session 13.
Dec  8 04:42:21 np0005550137 systemd-logind[805]: New session 14 of user zuul.
Dec  8 04:42:21 np0005550137 systemd[1]: Started Session 14 of User zuul.
Dec  8 04:42:22 np0005550137 python3.9[59248]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:42:23 np0005550137 python3.9[59404]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:24 np0005550137 python3.9[59579]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:24 np0005550137 python3.9[59702]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765186943.6029503-83-241579082378154/.source.json _original_basename=.7b75d_z6 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:26 np0005550137 python3.9[59854]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:26 np0005550137 python3.9[59977]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186945.547272-152-191363993679387/.source _original_basename=.i5mtswt4 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:27 np0005550137 python3.9[60129]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:42:28 np0005550137 python3.9[60281]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:28 np0005550137 python3.9[60404]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186947.8142223-224-115539043291135/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:42:29 np0005550137 python3.9[60556]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:30 np0005550137 python3.9[60679]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765186949.0125427-224-269998033837300/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  8 04:42:31 np0005550137 python3.9[60831]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:31 np0005550137 python3.9[60983]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:32 np0005550137 python3.9[61106]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186951.27861-335-255039681399883/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:33 np0005550137 python3.9[61258]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:33 np0005550137 python3.9[61381]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186952.5047145-380-259719895786249/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:34 np0005550137 python3.9[61533]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:42:34 np0005550137 systemd[1]: Reloading.
Dec  8 04:42:35 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:42:35 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:42:35 np0005550137 systemd[1]: Reloading.
Dec  8 04:42:35 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:42:35 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:42:35 np0005550137 systemd[1]: Starting EDPM Container Shutdown...
Dec  8 04:42:35 np0005550137 systemd[1]: Finished EDPM Container Shutdown.
Dec  8 04:42:36 np0005550137 python3.9[61759]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:36 np0005550137 python3.9[61882]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186955.709158-449-4718990599613/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:37 np0005550137 python3.9[62034]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:38 np0005550137 python3.9[62157]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186957.0688086-494-25372433794187/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:38 np0005550137 python3.9[62309]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:42:38 np0005550137 systemd[1]: Reloading.
Dec  8 04:42:39 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:42:39 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:42:39 np0005550137 systemd[1]: Reloading.
Dec  8 04:42:39 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:42:39 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:42:39 np0005550137 systemd[1]: Starting Create netns directory...
Dec  8 04:42:39 np0005550137 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  8 04:42:39 np0005550137 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  8 04:42:39 np0005550137 systemd[1]: Finished Create netns directory.
Dec  8 04:42:40 np0005550137 python3.9[62534]: ansible-ansible.builtin.service_facts Invoked
Dec  8 04:42:40 np0005550137 network[62551]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  8 04:42:40 np0005550137 network[62552]: 'network-scripts' will be removed from distribution in near future.
Dec  8 04:42:40 np0005550137 network[62553]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  8 04:42:44 np0005550137 python3.9[62815]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:42:44 np0005550137 systemd[1]: Reloading.
Dec  8 04:42:44 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:42:44 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:42:44 np0005550137 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  8 04:42:45 np0005550137 iptables.init[62855]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  8 04:42:45 np0005550137 iptables.init[62855]: iptables: Flushing firewall rules: [  OK  ]
Dec  8 04:42:45 np0005550137 systemd[1]: iptables.service: Deactivated successfully.
Dec  8 04:42:45 np0005550137 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  8 04:42:46 np0005550137 python3.9[63051]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:42:48 np0005550137 python3.9[63205]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:42:48 np0005550137 systemd[1]: Reloading.
Dec  8 04:42:48 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:42:48 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:42:48 np0005550137 systemd[1]: Starting Netfilter Tables...
Dec  8 04:42:48 np0005550137 systemd[1]: Finished Netfilter Tables.
Dec  8 04:42:49 np0005550137 python3.9[63399]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:42:50 np0005550137 python3.9[63552]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:51 np0005550137 python3.9[63677]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186970.2663662-701-248952604165025/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:52 np0005550137 python3.9[63830]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  8 04:42:52 np0005550137 systemd[1]: Reloading OpenSSH server daemon...
Dec  8 04:42:52 np0005550137 systemd[1]: Reloaded OpenSSH server daemon.
Dec  8 04:42:53 np0005550137 python3.9[63986]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:54 np0005550137 python3.9[64138]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:54 np0005550137 python3.9[64261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186973.536615-794-86813522921195/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:55 np0005550137 python3.9[64413]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  8 04:42:55 np0005550137 systemd[1]: Starting Time & Date Service...
Dec  8 04:42:55 np0005550137 systemd[1]: Started Time & Date Service.
Dec  8 04:42:56 np0005550137 python3.9[64569]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:57 np0005550137 python3.9[64723]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:58 np0005550137 python3.9[64846]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186977.068461-899-194553419558740/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:42:58 np0005550137 python3.9[64998]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:42:59 np0005550137 python3.9[65121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765186978.4616644-944-114564354831099/.source.yaml _original_basename=.rn_1jeal follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:00 np0005550137 python3.9[65273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:43:00 np0005550137 python3.9[65396]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186979.9203427-989-270207311779186/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:01 np0005550137 python3.9[65548]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:43:02 np0005550137 python3.9[65701]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:43:03 np0005550137 python3[65854]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  8 04:43:04 np0005550137 python3.9[66006]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:43:05 np0005550137 python3.9[66131]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186984.197287-1106-5108830459062/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:06 np0005550137 python3.9[66283]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:43:07 np0005550137 python3.9[66406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186986.1404767-1151-233624638845396/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:08 np0005550137 python3.9[66558]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:43:09 np0005550137 python3.9[66681]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186987.650324-1196-62512673227693/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:09 np0005550137 python3.9[66833]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:43:10 np0005550137 python3.9[66956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186989.1865232-1241-225607424499748/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:11 np0005550137 python3.9[67108]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  8 04:43:11 np0005550137 python3.9[67231]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765186990.5995498-1286-190142509334048/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:12 np0005550137 python3.9[67385]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:13 np0005550137 python3.9[67537]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:43:14 np0005550137 python3.9[67696]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:15 np0005550137 python3.9[67849]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:15 np0005550137 python3.9[68001]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:16 np0005550137 python3.9[68153]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  8 04:43:17 np0005550137 python3.9[68306]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  8 04:43:18 np0005550137 systemd[1]: session-14.scope: Deactivated successfully.
Dec  8 04:43:18 np0005550137 systemd[1]: session-14.scope: Consumed 38.771s CPU time.
Dec  8 04:43:18 np0005550137 systemd-logind[805]: Session 14 logged out. Waiting for processes to exit.
Dec  8 04:43:18 np0005550137 systemd-logind[805]: Removed session 14.
Dec  8 04:43:23 np0005550137 systemd-logind[805]: New session 15 of user zuul.
Dec  8 04:43:23 np0005550137 systemd[1]: Started Session 15 of User zuul.
Dec  8 04:43:24 np0005550137 python3.9[68489]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  8 04:43:25 np0005550137 python3.9[68641]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:43:26 np0005550137 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  8 04:43:26 np0005550137 python3.9[68795]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:43:27 np0005550137 python3.9[68947]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJ8NVpl4t0huV4+T0af6g85GnyyXSuSwbTPUcRC4oID0GzeUJSQ95u2BvKUkl7F6B5EGewEFy3IY9b514xvpbAdhjs7n5SblOcPNUJ+y693K3gdRy/ANBX6zwRxcDMLMyKa8JM7Tdp31exvoQgef6Ep9i62uFn/NfDJtCfGlrN47cRnnIdsYNIImcLTHBGS3hYjDfXiIsNjK3/QjWQXtvDY8RtPZFkdEbVyb7U8G30FzDbr4XI93l9Gr9VRGBtV4lCUgkTnXGFf232VBqxvuyHgk+SXuLKDpTE+BcVxTMwJkyHsksB+UnvpLUbxZddGUIr1vQ4jJ9VJGjVjuztU6Nyje6OrMgs6HBt1mja5G9KkYFxAEQRKP+X4fo6vT0V0DhMJ6aQgUoHTrlvK7dcPY9cNg9D3MonkMsmq0Pnwh8KuCqvB1rIu6tGFXV80vnbpGgq4Gk7mpYr96b41joz7Cx5OBZvmSL5BFvd3CiwI/gncrrztIjmaL6bUmhCxhnnTMc=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDhxvi6X/2xlfPUZnHs0RwyPTPsqOO1DYh2NBIbGdlbz#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKw6RojtVQ4sb307auN8wfR/el1N1E7X58nYuMS8W5BYU/mWwfoBlDfdW5gSR8N6MMMTBCtk95axIutM9UTPHjA=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqGrhm6XY5gL0qKV70KB+VWpiiWHOdu0MlsKHGYoCPmup7NRaI53l8P2B2gElQCUWIOrGSqX+krCxY+35BNmEjUpjw2KT+AFloSBIschbqhiwMysoE8FO+lpr/XU7s+rWWR/O4i/v/olXuA8mKbrkuJSfJHx2AdeXiykSViUK48d1CXdPz56NX/f9lJvPo6S96EhJShAVdPDwMFIPPDc321VAJdd0sXRu9K5njusTG2DlBTHNfHQb3XGTuZQcaP266UMa7a/K+w7hsSOGu1m8dZ3PloU3bAZJV3QUDpIzYwRXGO2w1BcvHVS6YvCLutLZqaMHz2KfT8atjyYylLKRl3Lf7OsrLbreyUVDVtlTgASxom/DE18McJlpdfM7gduJWByn5CKIsncToF5DFI/F+o/hi8ffGZWwegtxxhj/Zvo6GkTnvhppR/rClJpfUYWk1ufmryiUXqo/UsNViCnHjxVVqcFBUcJCj0uBpgQdlXPD3AUgiUcCAP9fh9Dk8iwM=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKv92xjYFN5l4X2DvW+J//ZkQOtLVnjqyglt5FFEdTjH#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJHSgMad2U38teRVX3WClQWpAI17/0L20etvnwLywKQrzqu+b2F4ZqSpsu2yCFYcfRJoEtDQwSmG5UmhpK3Kadw=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDB/zZYe8DfA99Ng/3XEzOPTwv/qjSjlCTbcZi22m+osE/6ubIaQtU6hZZ2UiWc5OMDuUGGTWciB+bOwgu4HPT648N8k8XawJ1ZE3yPo7GhPG4jt7+lRmK+VKR+yqR7V8udNU5cfkL5J4lcxOUNxyrZjEodEovTMNeHctTE33QgcntogqUmaGntfJA3jK4xa/i3INl643DoELTFJLNdvHN1qMJ7v32SIF49fjNuKORX6eXYA2ukSiPZ23COyZNgL9OgpXXceoF6gpYggg5sLTck2S3p07p/GNt0SQSx9Sf2edAFUVg7IJzqEL6+hBheK2L/kpEQmujn3VKQrCJ7fL0wu7ys1rnbA3g/jDT3DqpjOL3n+U9frJKSH9MHAD+TjfOZa7oHLQtdBUSLjkmL5Ph14Nmtryoei9MZrKdhwr1taPd46t1jogP0XyrvkLWJmtiAcVQjubJqiRVVTb813nHgI9Txp2W1RM9V5oMdCRBgWsVuMoB52GcVSG5ACm33/xk=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPg0wKCpmeAJBOHJra27vJw1dBiql8GMRNSifIDzjK9p#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIYHdYuKMSpYhFy8rWCrlTlBjprkLeMIvpYpr5DwhaVqN10fCxq9CoQYeZUDQNOCtaemskK9zzUyW1cfqwnaTnI=#012 create=True mode=0644 path=/tmp/ansible.6dug0z7t state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:28 np0005550137 python3.9[69099]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.6dug0z7t' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:43:29 np0005550137 python3.9[69253]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.6dug0z7t state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:29 np0005550137 systemd[1]: session-15.scope: Deactivated successfully.
Dec  8 04:43:29 np0005550137 systemd[1]: session-15.scope: Consumed 3.805s CPU time.
Dec  8 04:43:29 np0005550137 systemd-logind[805]: Session 15 logged out. Waiting for processes to exit.
Dec  8 04:43:29 np0005550137 systemd-logind[805]: Removed session 15.
Dec  8 04:43:34 np0005550137 systemd-logind[805]: New session 16 of user zuul.
Dec  8 04:43:34 np0005550137 systemd[1]: Started Session 16 of User zuul.
Dec  8 04:43:35 np0005550137 python3.9[69435]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:43:36 np0005550137 python3.9[69591]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  8 04:43:37 np0005550137 python3.9[69745]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  8 04:43:38 np0005550137 python3.9[69898]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:43:39 np0005550137 python3.9[70051]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:43:40 np0005550137 python3.9[70205]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:43:41 np0005550137 python3.9[70360]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:43:41 np0005550137 systemd[1]: session-16.scope: Deactivated successfully.
Dec  8 04:43:41 np0005550137 systemd[1]: session-16.scope: Consumed 5.087s CPU time.
Dec  8 04:43:41 np0005550137 systemd-logind[805]: Session 16 logged out. Waiting for processes to exit.
Dec  8 04:43:41 np0005550137 systemd-logind[805]: Removed session 16.
Dec  8 04:43:47 np0005550137 systemd-logind[805]: New session 17 of user zuul.
Dec  8 04:43:47 np0005550137 systemd[1]: Started Session 17 of User zuul.
Dec  8 04:43:48 np0005550137 python3.9[70538]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:43:49 np0005550137 python3.9[70694]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  8 04:43:50 np0005550137 python3.9[70778]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  8 04:43:53 np0005550137 python3.9[70929]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:43:54 np0005550137 python3.9[71080]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  8 04:43:55 np0005550137 python3.9[71230]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:43:55 np0005550137 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  8 04:43:55 np0005550137 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  8 04:43:56 np0005550137 python3.9[71381]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  8 04:43:56 np0005550137 systemd[1]: session-17.scope: Deactivated successfully.
Dec  8 04:43:56 np0005550137 systemd[1]: session-17.scope: Consumed 6.363s CPU time.
Dec  8 04:43:56 np0005550137 systemd-logind[805]: Session 17 logged out. Waiting for processes to exit.
Dec  8 04:43:56 np0005550137 systemd-logind[805]: Removed session 17.
Dec  8 04:44:04 np0005550137 systemd-logind[805]: New session 18 of user zuul.
Dec  8 04:44:04 np0005550137 systemd[1]: Started Session 18 of User zuul.
Dec  8 04:44:10 np0005550137 python3[72149]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:44:12 np0005550137 python3[72244]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  8 04:44:14 np0005550137 python3[72271]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  8 04:44:14 np0005550137 python3[72297]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:44:14 np0005550137 kernel: loop: module loaded
Dec  8 04:44:14 np0005550137 kernel: loop3: detected capacity change from 0 to 41943040
Dec  8 04:44:15 np0005550137 python3[72332]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:44:15 np0005550137 lvm[72335]: PV /dev/loop3 not used.
Dec  8 04:44:15 np0005550137 lvm[72344]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:44:15 np0005550137 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec  8 04:44:15 np0005550137 lvm[72346]:  1 logical volume(s) in volume group "ceph_vg0" now active
Dec  8 04:44:15 np0005550137 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec  8 04:44:15 np0005550137 chronyd[58614]: Selected source 198.181.199.82 (pool.ntp.org)
Dec  8 04:44:15 np0005550137 python3[72424]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:44:16 np0005550137 python3[72499]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187055.5084276-36797-190082586437348/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:44:16 np0005550137 python3[72549]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  8 04:44:17 np0005550137 systemd[1]: Reloading.
Dec  8 04:44:17 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:44:17 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:44:17 np0005550137 systemd[1]: Starting Ceph OSD losetup...
Dec  8 04:44:17 np0005550137 bash[72590]: /dev/loop3: [64513]:4194934 (/var/lib/ceph-osd-0.img)
Dec  8 04:44:17 np0005550137 systemd[1]: Finished Ceph OSD losetup.
Dec  8 04:44:17 np0005550137 lvm[72591]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:44:17 np0005550137 lvm[72591]: VG ceph_vg0 finished
Dec  8 04:44:19 np0005550137 python3[72615]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  8 04:44:22 np0005550137 python3[72708]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  8 04:44:24 np0005550137 python3[72768]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  8 04:44:27 np0005550137 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  8 04:44:27 np0005550137 systemd[1]: Starting man-db-cache-update.service...
Dec  8 04:44:28 np0005550137 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  8 04:44:28 np0005550137 systemd[1]: Finished man-db-cache-update.service.
Dec  8 04:44:28 np0005550137 systemd[1]: run-r1028fa6e40f64c4ea619b0e264c64742.service: Deactivated successfully.
Dec  8 04:44:28 np0005550137 python3[72885]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  8 04:44:28 np0005550137 python3[72915]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:44:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:29 np0005550137 python3[72978]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:44:30 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:30 np0005550137 python3[73004]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:44:30 np0005550137 python3[73082]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:44:31 np0005550137 python3[73155]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187070.6401625-36989-6378358918453/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:44:32 np0005550137 python3[73257]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:44:32 np0005550137 python3[73330]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187071.862274-37007-262402172710259/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:44:32 np0005550137 python3[73380]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  8 04:44:33 np0005550137 python3[73408]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  8 04:44:33 np0005550137 python3[73436]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  8 04:44:33 np0005550137 python3[73464]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:44:34 np0005550137 systemd-logind[805]: New session 19 of user ceph-admin.
Dec  8 04:44:34 np0005550137 systemd[1]: Created slice User Slice of UID 42477.
Dec  8 04:44:34 np0005550137 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  8 04:44:34 np0005550137 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  8 04:44:34 np0005550137 systemd[1]: Starting User Manager for UID 42477...
Dec  8 04:44:34 np0005550137 systemd[73472]: Queued start job for default target Main User Target.
Dec  8 04:44:34 np0005550137 systemd[73472]: Created slice User Application Slice.
Dec  8 04:44:34 np0005550137 systemd[73472]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  8 04:44:34 np0005550137 systemd[73472]: Started Daily Cleanup of User's Temporary Directories.
Dec  8 04:44:34 np0005550137 systemd[73472]: Reached target Paths.
Dec  8 04:44:34 np0005550137 systemd[73472]: Reached target Timers.
Dec  8 04:44:34 np0005550137 systemd[73472]: Starting D-Bus User Message Bus Socket...
Dec  8 04:44:34 np0005550137 systemd[73472]: Starting Create User's Volatile Files and Directories...
Dec  8 04:44:34 np0005550137 systemd[73472]: Listening on D-Bus User Message Bus Socket.
Dec  8 04:44:34 np0005550137 systemd[73472]: Reached target Sockets.
Dec  8 04:44:34 np0005550137 systemd[73472]: Finished Create User's Volatile Files and Directories.
Dec  8 04:44:34 np0005550137 systemd[73472]: Reached target Basic System.
Dec  8 04:44:34 np0005550137 systemd[73472]: Reached target Main User Target.
Dec  8 04:44:34 np0005550137 systemd[73472]: Startup finished in 150ms.
Dec  8 04:44:34 np0005550137 systemd[1]: Started User Manager for UID 42477.
Dec  8 04:44:34 np0005550137 systemd[1]: Started Session 19 of User ceph-admin.
Dec  8 04:44:34 np0005550137 systemd[1]: session-19.scope: Deactivated successfully.
Dec  8 04:44:34 np0005550137 systemd-logind[805]: Session 19 logged out. Waiting for processes to exit.
Dec  8 04:44:34 np0005550137 systemd-logind[805]: Removed session 19.
Dec  8 04:44:34 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:34 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:37 np0005550137 systemd[1]: var-lib-containers-storage-overlay-compat1891359764-lower\x2dmapped.mount: Deactivated successfully.
Dec  8 04:44:44 np0005550137 systemd[1]: Stopping User Manager for UID 42477...
Dec  8 04:44:45 np0005550137 systemd[73472]: Activating special unit Exit the Session...
Dec  8 04:44:45 np0005550137 systemd[73472]: Stopped target Main User Target.
Dec  8 04:44:45 np0005550137 systemd[73472]: Stopped target Basic System.
Dec  8 04:44:45 np0005550137 systemd[73472]: Stopped target Paths.
Dec  8 04:44:45 np0005550137 systemd[73472]: Stopped target Sockets.
Dec  8 04:44:45 np0005550137 systemd[73472]: Stopped target Timers.
Dec  8 04:44:45 np0005550137 systemd[73472]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  8 04:44:45 np0005550137 systemd[73472]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  8 04:44:45 np0005550137 systemd[73472]: Closed D-Bus User Message Bus Socket.
Dec  8 04:44:45 np0005550137 systemd[73472]: Stopped Create User's Volatile Files and Directories.
Dec  8 04:44:45 np0005550137 systemd[73472]: Removed slice User Application Slice.
Dec  8 04:44:45 np0005550137 systemd[73472]: Reached target Shutdown.
Dec  8 04:44:45 np0005550137 systemd[73472]: Finished Exit the Session.
Dec  8 04:44:45 np0005550137 systemd[73472]: Reached target Exit the Session.
Dec  8 04:44:45 np0005550137 systemd[1]: user@42477.service: Deactivated successfully.
Dec  8 04:44:45 np0005550137 systemd[1]: Stopped User Manager for UID 42477.
Dec  8 04:44:45 np0005550137 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  8 04:44:45 np0005550137 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  8 04:44:45 np0005550137 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  8 04:44:45 np0005550137 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  8 04:44:45 np0005550137 systemd[1]: Removed slice User Slice of UID 42477.
Dec  8 04:44:51 np0005550137 podman[73568]: 2025-12-08 09:44:51.824834564 +0000 UTC m=+16.815892502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:51 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:51 np0005550137 podman[73643]: 2025-12-08 09:44:51.904428151 +0000 UTC m=+0.044908425 container create a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f (image=quay.io/ceph/ceph:v19, name=blissful_panini, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  8 04:44:51 np0005550137 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  8 04:44:51 np0005550137 systemd[1]: Started libpod-conmon-a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f.scope.
Dec  8 04:44:51 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:51 np0005550137 podman[73643]: 2025-12-08 09:44:51.884222195 +0000 UTC m=+0.024702499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:52 np0005550137 podman[73643]: 2025-12-08 09:44:52.012510676 +0000 UTC m=+0.152991050 container init a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f (image=quay.io/ceph/ceph:v19, name=blissful_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  8 04:44:52 np0005550137 podman[73643]: 2025-12-08 09:44:52.021403306 +0000 UTC m=+0.161883580 container start a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f (image=quay.io/ceph/ceph:v19, name=blissful_panini, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:44:52 np0005550137 podman[73643]: 2025-12-08 09:44:52.025821275 +0000 UTC m=+0.166301569 container attach a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f (image=quay.io/ceph/ceph:v19, name=blissful_panini, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:44:52 np0005550137 blissful_panini[73658]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  8 04:44:52 np0005550137 systemd[1]: libpod-a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f.scope: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73643]: 2025-12-08 09:44:52.14216678 +0000 UTC m=+0.282647054 container died a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f (image=quay.io/ceph/ceph:v19, name=blissful_panini, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:44:52 np0005550137 systemd[1]: var-lib-containers-storage-overlay-3198b81c6a49aa99c2b94402a137dd152dc0c1eb8b94caae1cd157737bf11f73-merged.mount: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73643]: 2025-12-08 09:44:52.189992307 +0000 UTC m=+0.330472591 container remove a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f (image=quay.io/ceph/ceph:v19, name=blissful_panini, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:44:52 np0005550137 systemd[1]: libpod-conmon-a8fa0c2a13f168be9f78f1dea58847a9c39734f8eebff5e8c91aca07097f705f.scope: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73675]: 2025-12-08 09:44:52.265833056 +0000 UTC m=+0.052534477 container create f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88 (image=quay.io/ceph/ceph:v19, name=gifted_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:44:52 np0005550137 systemd[1]: Started libpod-conmon-f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88.scope.
Dec  8 04:44:52 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:52 np0005550137 podman[73675]: 2025-12-08 09:44:52.239523787 +0000 UTC m=+0.026225258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:52 np0005550137 podman[73675]: 2025-12-08 09:44:52.340132796 +0000 UTC m=+0.126834247 container init f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88 (image=quay.io/ceph/ceph:v19, name=gifted_mccarthy, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:44:52 np0005550137 podman[73675]: 2025-12-08 09:44:52.347820928 +0000 UTC m=+0.134522359 container start f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88 (image=quay.io/ceph/ceph:v19, name=gifted_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  8 04:44:52 np0005550137 podman[73675]: 2025-12-08 09:44:52.351713951 +0000 UTC m=+0.138415402 container attach f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88 (image=quay.io/ceph/ceph:v19, name=gifted_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  8 04:44:52 np0005550137 gifted_mccarthy[73692]: 167 167
Dec  8 04:44:52 np0005550137 systemd[1]: libpod-f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88.scope: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73675]: 2025-12-08 09:44:52.353050983 +0000 UTC m=+0.139752414 container died f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88 (image=quay.io/ceph/ceph:v19, name=gifted_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:44:52 np0005550137 podman[73675]: 2025-12-08 09:44:52.395902413 +0000 UTC m=+0.182603834 container remove f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88 (image=quay.io/ceph/ceph:v19, name=gifted_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  8 04:44:52 np0005550137 systemd[1]: libpod-conmon-f70fcf2a48b6de45ee712a9c1b6b6ec1b28ac986e2702fa15278eb25b1cc5d88.scope: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73709]: 2025-12-08 09:44:52.462346236 +0000 UTC m=+0.038776623 container create fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01 (image=quay.io/ceph/ceph:v19, name=distracted_wing, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  8 04:44:52 np0005550137 systemd[1]: Started libpod-conmon-fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01.scope.
Dec  8 04:44:52 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:52 np0005550137 podman[73709]: 2025-12-08 09:44:52.520367564 +0000 UTC m=+0.096797951 container init fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01 (image=quay.io/ceph/ceph:v19, name=distracted_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:44:52 np0005550137 podman[73709]: 2025-12-08 09:44:52.526898299 +0000 UTC m=+0.103328686 container start fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01 (image=quay.io/ceph/ceph:v19, name=distracted_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:44:52 np0005550137 podman[73709]: 2025-12-08 09:44:52.530021868 +0000 UTC m=+0.106452255 container attach fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01 (image=quay.io/ceph/ceph:v19, name=distracted_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:44:52 np0005550137 podman[73709]: 2025-12-08 09:44:52.447013233 +0000 UTC m=+0.023443650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:52 np0005550137 distracted_wing[73725]: AQAUnjZpnYueIBAAZvL1MsSSfM9oEjv2de5Ivg==
Dec  8 04:44:52 np0005550137 systemd[1]: libpod-fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01.scope: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73709]: 2025-12-08 09:44:52.550909096 +0000 UTC m=+0.127339483 container died fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01 (image=quay.io/ceph/ceph:v19, name=distracted_wing, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:44:52 np0005550137 podman[73709]: 2025-12-08 09:44:52.585531656 +0000 UTC m=+0.161962043 container remove fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01 (image=quay.io/ceph/ceph:v19, name=distracted_wing, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  8 04:44:52 np0005550137 systemd[1]: libpod-conmon-fba58195d383902e1439043199511b4e29970d6fd4dfd479f7e97cee1f8fec01.scope: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73743]: 2025-12-08 09:44:52.671744293 +0000 UTC m=+0.062257223 container create ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573 (image=quay.io/ceph/ceph:v19, name=epic_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  8 04:44:52 np0005550137 systemd[1]: Started libpod-conmon-ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573.scope.
Dec  8 04:44:52 np0005550137 podman[73743]: 2025-12-08 09:44:52.639058632 +0000 UTC m=+0.029571642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:52 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:52 np0005550137 podman[73743]: 2025-12-08 09:44:52.761120608 +0000 UTC m=+0.151633618 container init ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573 (image=quay.io/ceph/ceph:v19, name=epic_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  8 04:44:52 np0005550137 podman[73743]: 2025-12-08 09:44:52.769293295 +0000 UTC m=+0.159806265 container start ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573 (image=quay.io/ceph/ceph:v19, name=epic_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  8 04:44:52 np0005550137 podman[73743]: 2025-12-08 09:44:52.774093576 +0000 UTC m=+0.164606536 container attach ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573 (image=quay.io/ceph/ceph:v19, name=epic_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:44:52 np0005550137 epic_grothendieck[73759]: AQAUnjZpGvTxLhAAyjXOXQfMrH/6JwbRKjUWhQ==
Dec  8 04:44:52 np0005550137 systemd[1]: libpod-ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573.scope: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73743]: 2025-12-08 09:44:52.791552337 +0000 UTC m=+0.182065277 container died ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573 (image=quay.io/ceph/ceph:v19, name=epic_grothendieck, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  8 04:44:52 np0005550137 podman[73743]: 2025-12-08 09:44:52.841412227 +0000 UTC m=+0.231925177 container remove ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573 (image=quay.io/ceph/ceph:v19, name=epic_grothendieck, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:44:52 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:52 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:52 np0005550137 systemd[1]: libpod-conmon-ec620bff6f73a0c0d0af7dbfd630dbb57740b5ca8f8dd52abfad11b06a839573.scope: Deactivated successfully.
Dec  8 04:44:52 np0005550137 podman[73779]: 2025-12-08 09:44:52.932363632 +0000 UTC m=+0.063447950 container create 57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e (image=quay.io/ceph/ceph:v19, name=great_diffie, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:44:52 np0005550137 systemd[1]: Started libpod-conmon-57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e.scope.
Dec  8 04:44:52 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:52 np0005550137 podman[73779]: 2025-12-08 09:44:52.906948932 +0000 UTC m=+0.038033280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:53 np0005550137 podman[73779]: 2025-12-08 09:44:53.278547637 +0000 UTC m=+0.409631975 container init 57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e (image=quay.io/ceph/ceph:v19, name=great_diffie, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:44:53 np0005550137 podman[73779]: 2025-12-08 09:44:53.283511594 +0000 UTC m=+0.414595902 container start 57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e (image=quay.io/ceph/ceph:v19, name=great_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:44:53 np0005550137 great_diffie[73795]: AQAVnjZpR0HwERAAc7X8EWsptJyW0tlvuA6QQg==
Dec  8 04:44:53 np0005550137 systemd[1]: libpod-57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e.scope: Deactivated successfully.
Dec  8 04:44:53 np0005550137 podman[73779]: 2025-12-08 09:44:53.42434374 +0000 UTC m=+0.555428098 container attach 57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e (image=quay.io/ceph/ceph:v19, name=great_diffie, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  8 04:44:53 np0005550137 podman[73779]: 2025-12-08 09:44:53.424948839 +0000 UTC m=+0.556033167 container died 57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e (image=quay.io/ceph/ceph:v19, name=great_diffie, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:44:54 np0005550137 systemd[1]: var-lib-containers-storage-overlay-1e1ce1b488b99d7d30c5cd616d949d65cb23316dad6670297c1585be83bc3ff3-merged.mount: Deactivated successfully.
Dec  8 04:44:54 np0005550137 podman[73779]: 2025-12-08 09:44:54.932402185 +0000 UTC m=+2.063486493 container remove 57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e (image=quay.io/ceph/ceph:v19, name=great_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:44:54 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:54 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:54 np0005550137 systemd[1]: libpod-conmon-57b2d31046870db0f58242fb6ad7463d4d36c7c324c70d9d7043d7e39b6f7b8e.scope: Deactivated successfully.
Dec  8 04:44:54 np0005550137 podman[73815]: 2025-12-08 09:44:54.995551095 +0000 UTC m=+0.038318719 container create 36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75 (image=quay.io/ceph/ceph:v19, name=inspiring_noyce, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:44:55 np0005550137 systemd[1]: Started libpod-conmon-36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75.scope.
Dec  8 04:44:55 np0005550137 podman[73815]: 2025-12-08 09:44:54.978334823 +0000 UTC m=+0.021102467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:55 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:55 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43db4879fab1d170fc452573a661919c8b976bff367c71c02d61f20c525e299a/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:55 np0005550137 podman[73815]: 2025-12-08 09:44:55.108617416 +0000 UTC m=+0.151385110 container init 36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75 (image=quay.io/ceph/ceph:v19, name=inspiring_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:44:55 np0005550137 podman[73815]: 2025-12-08 09:44:55.116634959 +0000 UTC m=+0.159402603 container start 36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75 (image=quay.io/ceph/ceph:v19, name=inspiring_noyce, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:44:55 np0005550137 podman[73815]: 2025-12-08 09:44:55.120255023 +0000 UTC m=+0.163022657 container attach 36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75 (image=quay.io/ceph/ceph:v19, name=inspiring_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  8 04:44:55 np0005550137 inspiring_noyce[73832]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec  8 04:44:55 np0005550137 inspiring_noyce[73832]: setting min_mon_release = quincy
Dec  8 04:44:55 np0005550137 inspiring_noyce[73832]: /usr/bin/monmaptool: set fsid to ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:55 np0005550137 inspiring_noyce[73832]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec  8 04:44:55 np0005550137 systemd[1]: libpod-36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75.scope: Deactivated successfully.
Dec  8 04:44:55 np0005550137 podman[73815]: 2025-12-08 09:44:55.169576316 +0000 UTC m=+0.212344000 container died 36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75 (image=quay.io/ceph/ceph:v19, name=inspiring_noyce, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  8 04:44:55 np0005550137 podman[73815]: 2025-12-08 09:44:55.212701015 +0000 UTC m=+0.255468649 container remove 36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75 (image=quay.io/ceph/ceph:v19, name=inspiring_noyce, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Dec  8 04:44:55 np0005550137 systemd[1]: libpod-conmon-36c9e81d99fc6b21b617a04cdb6f05c7b9c46c439a303adb95c45bca5f16bf75.scope: Deactivated successfully.
Dec  8 04:44:55 np0005550137 podman[73851]: 2025-12-08 09:44:55.281019678 +0000 UTC m=+0.045807985 container create 8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8 (image=quay.io/ceph/ceph:v19, name=nice_euler, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:44:55 np0005550137 systemd[1]: Started libpod-conmon-8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8.scope.
Dec  8 04:44:55 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:55 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bacee09e322e6bb01a317bce2bac3f78f5e5e0550d83953f0306c5020ca396f2/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:55 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bacee09e322e6bb01a317bce2bac3f78f5e5e0550d83953f0306c5020ca396f2/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:55 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bacee09e322e6bb01a317bce2bac3f78f5e5e0550d83953f0306c5020ca396f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:55 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bacee09e322e6bb01a317bce2bac3f78f5e5e0550d83953f0306c5020ca396f2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:55 np0005550137 podman[73851]: 2025-12-08 09:44:55.261184333 +0000 UTC m=+0.025972640 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:55 np0005550137 podman[73851]: 2025-12-08 09:44:55.363215136 +0000 UTC m=+0.128003443 container init 8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8 (image=quay.io/ceph/ceph:v19, name=nice_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:44:55 np0005550137 podman[73851]: 2025-12-08 09:44:55.373854721 +0000 UTC m=+0.138642998 container start 8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8 (image=quay.io/ceph/ceph:v19, name=nice_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:44:55 np0005550137 podman[73851]: 2025-12-08 09:44:55.377571349 +0000 UTC m=+0.142359626 container attach 8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8 (image=quay.io/ceph/ceph:v19, name=nice_euler, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:44:55 np0005550137 systemd[1]: libpod-8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8.scope: Deactivated successfully.
Dec  8 04:44:55 np0005550137 podman[73851]: 2025-12-08 09:44:55.456422983 +0000 UTC m=+0.221211270 container died 8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8 (image=quay.io/ceph/ceph:v19, name=nice_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  8 04:44:55 np0005550137 podman[73851]: 2025-12-08 09:44:55.495411501 +0000 UTC m=+0.260199778 container remove 8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8 (image=quay.io/ceph/ceph:v19, name=nice_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:44:55 np0005550137 systemd[1]: libpod-conmon-8129a4326ccd49b8a78e32cc59f45d581592773926257cc490b11d420ae0e2b8.scope: Deactivated successfully.
Dec  8 04:44:55 np0005550137 systemd[1]: Reloading.
Dec  8 04:44:55 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:44:55 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:44:55 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:55 np0005550137 systemd[1]: Reloading.
Dec  8 04:44:55 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:44:55 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:44:56 np0005550137 systemd[1]: Reached target All Ceph clusters and services.
Dec  8 04:44:56 np0005550137 systemd[1]: Reloading.
Dec  8 04:44:56 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:44:56 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:44:56 np0005550137 systemd[1]: Reached target Ceph cluster ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:44:56 np0005550137 systemd[1]: Reloading.
Dec  8 04:44:56 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:44:56 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:44:56 np0005550137 systemd[1]: Reloading.
Dec  8 04:44:56 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:44:56 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:44:56 np0005550137 systemd[1]: Created slice Slice /system/ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:44:56 np0005550137 systemd[1]: Reached target System Time Set.
Dec  8 04:44:56 np0005550137 systemd[1]: Reached target System Time Synchronized.
Dec  8 04:44:56 np0005550137 systemd[1]: Starting Ceph mon.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:44:56 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:56 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:57 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:57 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:57 np0005550137 podman[74144]: 2025-12-08 09:44:57.135464514 +0000 UTC m=+0.047746155 container create 35fecc86a949afee6dd81fee0b399e927144f302743f5c64ba32a2469e6927de (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58123813f905106696b0a3f9c9ad9f83f8347a9798a3be8ed019f517e59cd669/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58123813f905106696b0a3f9c9ad9f83f8347a9798a3be8ed019f517e59cd669/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58123813f905106696b0a3f9c9ad9f83f8347a9798a3be8ed019f517e59cd669/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58123813f905106696b0a3f9c9ad9f83f8347a9798a3be8ed019f517e59cd669/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 podman[74144]: 2025-12-08 09:44:57.20037955 +0000 UTC m=+0.112661251 container init 35fecc86a949afee6dd81fee0b399e927144f302743f5c64ba32a2469e6927de (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  8 04:44:57 np0005550137 podman[74144]: 2025-12-08 09:44:57.208761244 +0000 UTC m=+0.121042895 container start 35fecc86a949afee6dd81fee0b399e927144f302743f5c64ba32a2469e6927de (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:44:57 np0005550137 podman[74144]: 2025-12-08 09:44:57.116480747 +0000 UTC m=+0.028762418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:57 np0005550137 bash[74144]: 35fecc86a949afee6dd81fee0b399e927144f302743f5c64ba32a2469e6927de
Dec  8 04:44:57 np0005550137 systemd[1]: Started Ceph mon.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: set uid:gid to 167:167 (ceph:ceph)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: pidfile_write: ignore empty --pid-file
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: load: jerasure load: lrc 
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: RocksDB version: 7.9.2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Git sha 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: DB SUMMARY
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: DB Session ID:  5X6ZUXOI3CRFVSAT561D
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: CURRENT file:  CURRENT
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: IDENTITY file:  IDENTITY
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                         Options.error_if_exists: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                       Options.create_if_missing: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                         Options.paranoid_checks: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                                     Options.env: 0x5640240d5c20
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                                Options.info_log: 0x56402611ed60
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.max_file_opening_threads: 16
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                              Options.statistics: (nil)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                               Options.use_fsync: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                       Options.max_log_file_size: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                         Options.allow_fallocate: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                        Options.use_direct_reads: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:          Options.create_missing_column_families: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                              Options.db_log_dir: 
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                                 Options.wal_dir: 
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                   Options.advise_random_on_open: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                    Options.write_buffer_manager: 0x564026123900
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                            Options.rate_limiter: (nil)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.unordered_write: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                               Options.row_cache: None
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                              Options.wal_filter: None
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.allow_ingest_behind: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.two_write_queues: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.manual_wal_flush: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.wal_compression: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.atomic_flush: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                 Options.log_readahead_size: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.allow_data_in_errors: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.db_host_id: __hostname__
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.max_background_jobs: 2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.max_background_compactions: -1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.max_subcompactions: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.max_total_wal_size: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                          Options.max_open_files: -1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                          Options.bytes_per_sync: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:       Options.compaction_readahead_size: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.max_background_flushes: -1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Compression algorithms supported:
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: #011kZSTD supported: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: #011kXpressCompression supported: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: #011kBZip2Compression supported: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: #011kLZ4Compression supported: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: #011kZlibCompression supported: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: #011kSnappyCompression supported: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:           Options.merge_operator: 
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:        Options.compaction_filter: None
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56402611e500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x564026143350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:        Options.write_buffer_size: 33554432
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:  Options.max_write_buffer_number: 2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:          Options.compression: NoCompression
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.num_levels: 7
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 80444841-be0f-461b-9293-2c19ffebbf01
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187097263288, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187097265240, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187097, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80444841-be0f-461b-9293-2c19ffebbf01", "db_session_id": "5X6ZUXOI3CRFVSAT561D", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187097265361, "job": 1, "event": "recovery_finished"}
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564026144e00
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: DB pointer 0x56402624e000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x564026143350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@-1(???) e0 preinit fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(probing) e0 win_standalone_election
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : last_changed 2025-12-08T09:44:55.163607+0000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : created 2025-12-08T09:44:55.163607+0000
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,os=Linux}
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:44:57 np0005550137 podman[74165]: 2025-12-08 09:44:57.303730495 +0000 UTC m=+0.053124114 container create 9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d (image=quay.io/ceph/ceph:v19, name=tender_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).mds e1 new map
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-12-08T09:44:57:301434+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : fsmap 
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mkfs ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  8 04:44:57 np0005550137 systemd[1]: Started libpod-conmon-9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d.scope.
Dec  8 04:44:57 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:57 np0005550137 podman[74165]: 2025-12-08 09:44:57.284155188 +0000 UTC m=+0.033548817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270871fc11452ef77005eb77c122e79f4b276355d3e40a8c41a51f1ce336c915/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270871fc11452ef77005eb77c122e79f4b276355d3e40a8c41a51f1ce336c915/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/270871fc11452ef77005eb77c122e79f4b276355d3e40a8c41a51f1ce336c915/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 podman[74165]: 2025-12-08 09:44:57.40005776 +0000 UTC m=+0.149451419 container init 9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d (image=quay.io/ceph/ceph:v19, name=tender_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  8 04:44:57 np0005550137 podman[74165]: 2025-12-08 09:44:57.410944803 +0000 UTC m=+0.160338412 container start 9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d (image=quay.io/ceph/ceph:v19, name=tender_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:44:57 np0005550137 podman[74165]: 2025-12-08 09:44:57.416232499 +0000 UTC m=+0.165626128 container attach 9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d (image=quay.io/ceph/ceph:v19, name=tender_gould, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec  8 04:44:57 np0005550137 ceph-mon[74164]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222542050' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Dec  8 04:44:57 np0005550137 tender_gould[74219]:  cluster:
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    id:     ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    health: HEALTH_OK
Dec  8 04:44:57 np0005550137 tender_gould[74219]: 
Dec  8 04:44:57 np0005550137 tender_gould[74219]:  services:
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    mon: 1 daemons, quorum compute-0 (age 0.325705s)
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    mgr: no daemons active
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    osd: 0 osds: 0 up, 0 in
Dec  8 04:44:57 np0005550137 tender_gould[74219]: 
Dec  8 04:44:57 np0005550137 tender_gould[74219]:  data:
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    pools:   0 pools, 0 pgs
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    objects: 0 objects, 0 B
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    usage:   0 B used, 0 B / 0 B avail
Dec  8 04:44:57 np0005550137 tender_gould[74219]:    pgs:     
Dec  8 04:44:57 np0005550137 tender_gould[74219]: 
Dec  8 04:44:57 np0005550137 systemd[1]: libpod-9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d.scope: Deactivated successfully.
Dec  8 04:44:57 np0005550137 podman[74165]: 2025-12-08 09:44:57.639839193 +0000 UTC m=+0.389232802 container died 9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d (image=quay.io/ceph/ceph:v19, name=tender_gould, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:44:57 np0005550137 podman[74165]: 2025-12-08 09:44:57.736868429 +0000 UTC m=+0.486262048 container remove 9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d (image=quay.io/ceph/ceph:v19, name=tender_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  8 04:44:57 np0005550137 systemd[1]: libpod-conmon-9023697c7463dfa43ebe3abfdd5840b614f45e0c2581a7934eb6619af6304b0d.scope: Deactivated successfully.
Dec  8 04:44:57 np0005550137 podman[74257]: 2025-12-08 09:44:57.821488735 +0000 UTC m=+0.054584780 container create dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6 (image=quay.io/ceph/ceph:v19, name=fervent_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec  8 04:44:57 np0005550137 systemd[1]: Started libpod-conmon-dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6.scope.
Dec  8 04:44:57 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2176d47dc00074af31028230c90681f72b218f2c6d12a91f347c46053219bc25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2176d47dc00074af31028230c90681f72b218f2c6d12a91f347c46053219bc25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2176d47dc00074af31028230c90681f72b218f2c6d12a91f347c46053219bc25/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2176d47dc00074af31028230c90681f72b218f2c6d12a91f347c46053219bc25/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:57 np0005550137 podman[74257]: 2025-12-08 09:44:57.800940817 +0000 UTC m=+0.034036872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:57 np0005550137 podman[74257]: 2025-12-08 09:44:57.903287102 +0000 UTC m=+0.136383137 container init dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6 (image=quay.io/ceph/ceph:v19, name=fervent_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  8 04:44:57 np0005550137 podman[74257]: 2025-12-08 09:44:57.912401679 +0000 UTC m=+0.145497714 container start dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6 (image=quay.io/ceph/ceph:v19, name=fervent_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  8 04:44:57 np0005550137 podman[74257]: 2025-12-08 09:44:57.917226871 +0000 UTC m=+0.150322896 container attach dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6 (image=quay.io/ceph/ceph:v19, name=fervent_wright, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3967667729' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3967667729' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  8 04:44:58 np0005550137 fervent_wright[74274]: 
Dec  8 04:44:58 np0005550137 fervent_wright[74274]: [global]
Dec  8 04:44:58 np0005550137 fervent_wright[74274]: 	fsid = ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:58 np0005550137 fervent_wright[74274]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  8 04:44:58 np0005550137 systemd[1]: libpod-dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6.scope: Deactivated successfully.
Dec  8 04:44:58 np0005550137 podman[74257]: 2025-12-08 09:44:58.164964815 +0000 UTC m=+0.398060840 container died dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6 (image=quay.io/ceph/ceph:v19, name=fervent_wright, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:44:58 np0005550137 systemd[1]: var-lib-containers-storage-overlay-2176d47dc00074af31028230c90681f72b218f2c6d12a91f347c46053219bc25-merged.mount: Deactivated successfully.
Dec  8 04:44:58 np0005550137 podman[74257]: 2025-12-08 09:44:58.206554386 +0000 UTC m=+0.439650451 container remove dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6 (image=quay.io/ceph/ceph:v19, name=fervent_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  8 04:44:58 np0005550137 systemd[1]: libpod-conmon-dab801d67ae8272013d9bef86adc3e5baf4264bc655acc1028e130f573fa2cc6.scope: Deactivated successfully.
Dec  8 04:44:58 np0005550137 podman[74312]: 2025-12-08 09:44:58.277908073 +0000 UTC m=+0.043451909 container create 7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee (image=quay.io/ceph/ceph:v19, name=practical_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  8 04:44:58 np0005550137 podman[74312]: 2025-12-08 09:44:58.258294445 +0000 UTC m=+0.023838261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: from='client.? 192.168.122.100:0/3967667729' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: from='client.? 192.168.122.100:0/3967667729' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  8 04:44:58 np0005550137 systemd[1]: Started libpod-conmon-7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee.scope.
Dec  8 04:44:58 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28909cb7d5a1795f7f2decf607e72b156593cbc4167f3e7a05c108f436fe7a53/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28909cb7d5a1795f7f2decf607e72b156593cbc4167f3e7a05c108f436fe7a53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28909cb7d5a1795f7f2decf607e72b156593cbc4167f3e7a05c108f436fe7a53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/28909cb7d5a1795f7f2decf607e72b156593cbc4167f3e7a05c108f436fe7a53/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:58 np0005550137 podman[74312]: 2025-12-08 09:44:58.445267915 +0000 UTC m=+0.210811781 container init 7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee (image=quay.io/ceph/ceph:v19, name=practical_franklin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  8 04:44:58 np0005550137 podman[74312]: 2025-12-08 09:44:58.453822224 +0000 UTC m=+0.219366050 container start 7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee (image=quay.io/ceph/ceph:v19, name=practical_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  8 04:44:58 np0005550137 podman[74312]: 2025-12-08 09:44:58.457854702 +0000 UTC m=+0.223398498 container attach 7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee (image=quay.io/ceph/ceph:v19, name=practical_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843437649' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:44:58 np0005550137 systemd[1]: libpod-7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee.scope: Deactivated successfully.
Dec  8 04:44:58 np0005550137 podman[74312]: 2025-12-08 09:44:58.690832361 +0000 UTC m=+0.456376157 container died 7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee (image=quay.io/ceph/ceph:v19, name=practical_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  8 04:44:58 np0005550137 systemd[1]: var-lib-containers-storage-overlay-28909cb7d5a1795f7f2decf607e72b156593cbc4167f3e7a05c108f436fe7a53-merged.mount: Deactivated successfully.
Dec  8 04:44:58 np0005550137 podman[74312]: 2025-12-08 09:44:58.734246308 +0000 UTC m=+0.499790104 container remove 7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee (image=quay.io/ceph/ceph:v19, name=practical_franklin, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  8 04:44:58 np0005550137 systemd[1]: libpod-conmon-7ba703470f5704b8bc90ab6e3057d7dd27ac3fad4f3faf1f9a89a9c4d8f899ee.scope: Deactivated successfully.
Dec  8 04:44:58 np0005550137 systemd[1]: Stopping Ceph mon.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: mon.compute-0@0(leader) e1 shutdown
Dec  8 04:44:58 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0[74160]: 2025-12-08T09:44:58.939+0000 7fba7391d640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec  8 04:44:58 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0[74160]: 2025-12-08T09:44:58.939+0000 7fba7391d640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  8 04:44:58 np0005550137 ceph-mon[74164]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  8 04:44:59 np0005550137 podman[74393]: 2025-12-08 09:44:59.102807948 +0000 UTC m=+0.202207380 container died 35fecc86a949afee6dd81fee0b399e927144f302743f5c64ba32a2469e6927de (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:44:59 np0005550137 systemd[1]: var-lib-containers-storage-overlay-58123813f905106696b0a3f9c9ad9f83f8347a9798a3be8ed019f517e59cd669-merged.mount: Deactivated successfully.
Dec  8 04:44:59 np0005550137 podman[74393]: 2025-12-08 09:44:59.139143032 +0000 UTC m=+0.238542464 container remove 35fecc86a949afee6dd81fee0b399e927144f302743f5c64ba32a2469e6927de (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:44:59 np0005550137 bash[74393]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0
Dec  8 04:44:59 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:59 np0005550137 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  8 04:44:59 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@mon.compute-0.service: Deactivated successfully.
Dec  8 04:44:59 np0005550137 systemd[1]: Stopped Ceph mon.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:44:59 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@mon.compute-0.service: Consumed 1.022s CPU time.
Dec  8 04:44:59 np0005550137 systemd[1]: Starting Ceph mon.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:44:59 np0005550137 podman[74496]: 2025-12-08 09:44:59.641950782 +0000 UTC m=+0.061545580 container create e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  8 04:44:59 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dde9c8bfdcc13cbe4d2101ded2da9825ff52b7f30d5cff55e8f2fc47020fd23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:59 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dde9c8bfdcc13cbe4d2101ded2da9825ff52b7f30d5cff55e8f2fc47020fd23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:59 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dde9c8bfdcc13cbe4d2101ded2da9825ff52b7f30d5cff55e8f2fc47020fd23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:59 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7dde9c8bfdcc13cbe4d2101ded2da9825ff52b7f30d5cff55e8f2fc47020fd23/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:59 np0005550137 podman[74496]: 2025-12-08 09:44:59.611301106 +0000 UTC m=+0.030895944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:59 np0005550137 podman[74496]: 2025-12-08 09:44:59.722174999 +0000 UTC m=+0.141769847 container init e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:44:59 np0005550137 podman[74496]: 2025-12-08 09:44:59.740908698 +0000 UTC m=+0.160503456 container start e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:44:59 np0005550137 bash[74496]: e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007
Dec  8 04:44:59 np0005550137 systemd[1]: Started Ceph mon.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: set uid:gid to 167:167 (ceph:ceph)
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: pidfile_write: ignore empty --pid-file
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: load: jerasure load: lrc 
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: RocksDB version: 7.9.2
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Git sha 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: DB SUMMARY
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: DB Session ID:  WSOFQ4I8QWDIF20O9U4H
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: CURRENT file:  CURRENT
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: IDENTITY file:  IDENTITY
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58739 ; 
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                         Options.error_if_exists: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                       Options.create_if_missing: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                         Options.paranoid_checks: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                                     Options.env: 0x55fed2b66c20
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                                      Options.fs: PosixFileSystem
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                                Options.info_log: 0x55fed372dac0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.max_file_opening_threads: 16
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                              Options.statistics: (nil)
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                               Options.use_fsync: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                       Options.max_log_file_size: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                         Options.allow_fallocate: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                        Options.use_direct_reads: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:          Options.create_missing_column_families: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                              Options.db_log_dir: 
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                                 Options.wal_dir: 
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                   Options.advise_random_on_open: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                    Options.write_buffer_manager: 0x55fed3731900
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                            Options.rate_limiter: (nil)
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.unordered_write: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                               Options.row_cache: None
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                              Options.wal_filter: None
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.allow_ingest_behind: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.two_write_queues: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.manual_wal_flush: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.wal_compression: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.atomic_flush: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                 Options.log_readahead_size: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.allow_data_in_errors: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.db_host_id: __hostname__
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.max_background_jobs: 2
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.max_background_compactions: -1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.max_subcompactions: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.max_total_wal_size: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                          Options.max_open_files: -1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                          Options.bytes_per_sync: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:       Options.compaction_readahead_size: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.max_background_flushes: -1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Compression algorithms supported:
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: 	kZSTD supported: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: 	kXpressCompression supported: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: 	kBZip2Compression supported: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: 	kLZ4Compression supported: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: 	kZlibCompression supported: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: 	kSnappyCompression supported: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:           Options.merge_operator: 
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:        Options.compaction_filter: None
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fed372caa0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55fed3751350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:        Options.write_buffer_size: 33554432
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:  Options.max_write_buffer_number: 2
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:          Options.compression: NoCompression
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.num_levels: 7
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 80444841-be0f-461b-9293-2c19ffebbf01
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187099784387, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187099789571, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56964, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54481, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187099, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80444841-be0f-461b-9293-2c19ffebbf01", "db_session_id": "WSOFQ4I8QWDIF20O9U4H", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187099789859, "job": 1, "event": "recovery_finished"}
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fed3752e00
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: DB pointer 0x55fed385c000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.5      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.25 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55fed3751350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(???) e1 preinit fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(???).mds e1 new map
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-12-08T09:44:57:301434+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(probing) e1 win_standalone_election
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : last_changed 2025-12-08T09:44:55.163607+0000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : created 2025-12-08T09:44:55.163607+0000
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap 
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec  8 04:44:59 np0005550137 podman[74517]: 2025-12-08 09:44:59.833864347 +0000 UTC m=+0.055704116 container create 4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222 (image=quay.io/ceph/ceph:v19, name=agitated_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:44:59 np0005550137 ceph-mon[74516]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec  8 04:44:59 np0005550137 systemd[1]: Started libpod-conmon-4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222.scope.
Dec  8 04:44:59 np0005550137 podman[74517]: 2025-12-08 09:44:59.806048361 +0000 UTC m=+0.027888140 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:44:59 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:44:59 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9323d803a62a067c3a9e0ab9a1ac2639b34d68700064e285e59f3f09b331a536/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:59 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9323d803a62a067c3a9e0ab9a1ac2639b34d68700064e285e59f3f09b331a536/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:59 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9323d803a62a067c3a9e0ab9a1ac2639b34d68700064e285e59f3f09b331a536/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:44:59 np0005550137 podman[74517]: 2025-12-08 09:44:59.932521245 +0000 UTC m=+0.154361044 container init 4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222 (image=quay.io/ceph/ceph:v19, name=agitated_wilbur, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  8 04:44:59 np0005550137 podman[74517]: 2025-12-08 09:44:59.945628827 +0000 UTC m=+0.167468606 container start 4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222 (image=quay.io/ceph/ceph:v19, name=agitated_wilbur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  8 04:44:59 np0005550137 podman[74517]: 2025-12-08 09:44:59.952013449 +0000 UTC m=+0.173853198 container attach 4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222 (image=quay.io/ceph/ceph:v19, name=agitated_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Dec  8 04:45:00 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec  8 04:45:00 np0005550137 systemd[1]: libpod-4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222.scope: Deactivated successfully.
Dec  8 04:45:00 np0005550137 podman[74517]: 2025-12-08 09:45:00.212575497 +0000 UTC m=+0.434415246 container died 4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222 (image=quay.io/ceph/ceph:v19, name=agitated_wilbur, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  8 04:45:00 np0005550137 systemd[1]: var-lib-containers-storage-overlay-9323d803a62a067c3a9e0ab9a1ac2639b34d68700064e285e59f3f09b331a536-merged.mount: Deactivated successfully.
Dec  8 04:45:00 np0005550137 podman[74517]: 2025-12-08 09:45:00.245712441 +0000 UTC m=+0.467552180 container remove 4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222 (image=quay.io/ceph/ceph:v19, name=agitated_wilbur, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  8 04:45:00 np0005550137 systemd[1]: libpod-conmon-4a9d3dd3df3ece3f77b948176e38ace76b1a807f17b3caefaa1844f0db9cc222.scope: Deactivated successfully.
Dec  8 04:45:00 np0005550137 podman[74608]: 2025-12-08 09:45:00.309559422 +0000 UTC m=+0.043553953 container create 3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3 (image=quay.io/ceph/ceph:v19, name=zealous_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec  8 04:45:00 np0005550137 systemd[1]: Started libpod-conmon-3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3.scope.
Dec  8 04:45:00 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:00 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b849722b6ddaf477ad2575e2bb5e67e9d00b81b1ec1620c460182de4f25a75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:00 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b849722b6ddaf477ad2575e2bb5e67e9d00b81b1ec1620c460182de4f25a75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:00 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b849722b6ddaf477ad2575e2bb5e67e9d00b81b1ec1620c460182de4f25a75/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:00 np0005550137 podman[74608]: 2025-12-08 09:45:00.383907434 +0000 UTC m=+0.117901995 container init 3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3 (image=quay.io/ceph/ceph:v19, name=zealous_curie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:00 np0005550137 podman[74608]: 2025-12-08 09:45:00.290367697 +0000 UTC m=+0.024362258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:00 np0005550137 podman[74608]: 2025-12-08 09:45:00.391631217 +0000 UTC m=+0.125625748 container start 3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3 (image=quay.io/ceph/ceph:v19, name=zealous_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  8 04:45:00 np0005550137 podman[74608]: 2025-12-08 09:45:00.394324702 +0000 UTC m=+0.128319263 container attach 3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3 (image=quay.io/ceph/ceph:v19, name=zealous_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:00 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec  8 04:45:00 np0005550137 systemd[1]: libpod-3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3.scope: Deactivated successfully.
Dec  8 04:45:00 np0005550137 podman[74608]: 2025-12-08 09:45:00.622430028 +0000 UTC m=+0.356424569 container died 3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3 (image=quay.io/ceph/ceph:v19, name=zealous_curie, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:45:00 np0005550137 systemd[1]: var-lib-containers-storage-overlay-d5b849722b6ddaf477ad2575e2bb5e67e9d00b81b1ec1620c460182de4f25a75-merged.mount: Deactivated successfully.
Dec  8 04:45:00 np0005550137 podman[74608]: 2025-12-08 09:45:00.659988621 +0000 UTC m=+0.393983152 container remove 3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3 (image=quay.io/ceph/ceph:v19, name=zealous_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:00 np0005550137 systemd[1]: libpod-conmon-3e9574008f2874d4b4e4dad211dfc48cac017e7fe94664ca72ee74e98dd6f2f3.scope: Deactivated successfully.
Dec  8 04:45:00 np0005550137 systemd[1]: Reloading.
Dec  8 04:45:00 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:45:00 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:45:00 np0005550137 systemd[1]: Reloading.
Dec  8 04:45:01 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:45:01 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:45:01 np0005550137 systemd[1]: Starting Ceph mgr.compute-0.kitiwu for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:45:01 np0005550137 podman[74787]: 2025-12-08 09:45:01.436488142 +0000 UTC m=+0.048076686 container create 45414a27262c2a067a3abaa4867dd11d7c645d2215df8fa1caae96e3c46967fc (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  8 04:45:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4228bf6e2635c5b42a9352853ddbc9e9aeb485ff4d849759098fae2da8614302/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4228bf6e2635c5b42a9352853ddbc9e9aeb485ff4d849759098fae2da8614302/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4228bf6e2635c5b42a9352853ddbc9e9aeb485ff4d849759098fae2da8614302/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4228bf6e2635c5b42a9352853ddbc9e9aeb485ff4d849759098fae2da8614302/merged/var/lib/ceph/mgr/ceph-compute-0.kitiwu supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:01 np0005550137 podman[74787]: 2025-12-08 09:45:01.415771699 +0000 UTC m=+0.027360283 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:01 np0005550137 podman[74787]: 2025-12-08 09:45:01.512522347 +0000 UTC m=+0.124110981 container init 45414a27262c2a067a3abaa4867dd11d7c645d2215df8fa1caae96e3c46967fc (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:01 np0005550137 podman[74787]: 2025-12-08 09:45:01.521373175 +0000 UTC m=+0.132961749 container start 45414a27262c2a067a3abaa4867dd11d7c645d2215df8fa1caae96e3c46967fc (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  8 04:45:01 np0005550137 bash[74787]: 45414a27262c2a067a3abaa4867dd11d7c645d2215df8fa1caae96e3c46967fc
Dec  8 04:45:01 np0005550137 systemd[1]: Started Ceph mgr.compute-0.kitiwu for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:45:01 np0005550137 ceph-mgr[74806]: set uid:gid to 167:167 (ceph:ceph)
Dec  8 04:45:01 np0005550137 ceph-mgr[74806]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  8 04:45:01 np0005550137 ceph-mgr[74806]: pidfile_write: ignore empty --pid-file
Dec  8 04:45:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'alerts'
Dec  8 04:45:01 np0005550137 podman[74807]: 2025-12-08 09:45:01.61802969 +0000 UTC m=+0.052532135 container create fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b (image=quay.io/ceph/ceph:v19, name=admiring_euclid, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  8 04:45:01 np0005550137 systemd[1]: Started libpod-conmon-fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b.scope.
Dec  8 04:45:01 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a85cb690b8dbb166db3a31230cf59f77e6449f6de4c87177c61740956ab5c4e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a85cb690b8dbb166db3a31230cf59f77e6449f6de4c87177c61740956ab5c4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a85cb690b8dbb166db3a31230cf59f77e6449f6de4c87177c61740956ab5c4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:01 np0005550137 podman[74807]: 2025-12-08 09:45:01.598129823 +0000 UTC m=+0.032632258 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:01 np0005550137 podman[74807]: 2025-12-08 09:45:01.708698147 +0000 UTC m=+0.143200592 container init fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b (image=quay.io/ceph/ceph:v19, name=admiring_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:01 np0005550137 ceph-mgr[74806]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:45:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'balancer'
Dec  8 04:45:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:01.713+0000 7f9b030a2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:45:01 np0005550137 podman[74807]: 2025-12-08 09:45:01.719758535 +0000 UTC m=+0.154260950 container start fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b (image=quay.io/ceph/ceph:v19, name=admiring_euclid, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:01 np0005550137 podman[74807]: 2025-12-08 09:45:01.723848264 +0000 UTC m=+0.158350709 container attach fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b (image=quay.io/ceph/ceph:v19, name=admiring_euclid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  8 04:45:01 np0005550137 ceph-mgr[74806]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:45:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'cephadm'
Dec  8 04:45:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:01.797+0000 7f9b030a2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:45:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  8 04:45:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386026561' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]: 
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]: {
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "health": {
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "status": "HEALTH_OK",
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "checks": {},
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "mutes": []
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    },
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "election_epoch": 5,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "quorum": [
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        0
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    ],
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "quorum_names": [
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "compute-0"
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    ],
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "quorum_age": 2,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "monmap": {
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "epoch": 1,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "min_mon_release_name": "squid",
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_mons": 1
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    },
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "osdmap": {
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "epoch": 1,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_osds": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_up_osds": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "osd_up_since": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_in_osds": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "osd_in_since": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_remapped_pgs": 0
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    },
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "pgmap": {
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "pgs_by_state": [],
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_pgs": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_pools": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_objects": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "data_bytes": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "bytes_used": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "bytes_avail": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "bytes_total": 0
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    },
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "fsmap": {
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "epoch": 1,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "btime": "2025-12-08T09:44:57:301434+0000",
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "by_rank": [],
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "up:standby": 0
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    },
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "mgrmap": {
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "available": false,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "num_standbys": 0,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "modules": [
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:            "iostat",
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:            "nfs",
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:            "restful"
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        ],
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "services": {}
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    },
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "servicemap": {
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "epoch": 1,
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "modified": "2025-12-08T09:44:57.306979+0000",
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:        "services": {}
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    },
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]:    "progress_events": {}
Dec  8 04:45:01 np0005550137 admiring_euclid[74841]: }
Dec  8 04:45:01 np0005550137 systemd[1]: libpod-fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b.scope: Deactivated successfully.
Dec  8 04:45:01 np0005550137 podman[74807]: 2025-12-08 09:45:01.931927639 +0000 UTC m=+0.366430084 container died fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b (image=quay.io/ceph/ceph:v19, name=admiring_euclid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:01 np0005550137 systemd[1]: var-lib-containers-storage-overlay-8a85cb690b8dbb166db3a31230cf59f77e6449f6de4c87177c61740956ab5c4e-merged.mount: Deactivated successfully.
Dec  8 04:45:01 np0005550137 podman[74807]: 2025-12-08 09:45:01.970500574 +0000 UTC m=+0.405002999 container remove fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b (image=quay.io/ceph/ceph:v19, name=admiring_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  8 04:45:01 np0005550137 systemd[1]: libpod-conmon-fb830bfe2af609cdc1c3a861c041f4f6274c8c3c38db4696088700307c33382b.scope: Deactivated successfully.
Dec  8 04:45:02 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'crash'
Dec  8 04:45:02 np0005550137 ceph-mgr[74806]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:45:02 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'dashboard'
Dec  8 04:45:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:02.643+0000 7f9b030a2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'devicehealth'
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'diskprediction_local'
Dec  8 04:45:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:03.292+0000 7f9b030a2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:45:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  8 04:45:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  8 04:45:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  from numpy import show_config as show_numpy_config
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'influx'
Dec  8 04:45:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:03.462+0000 7f9b030a2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'insights'
Dec  8 04:45:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:03.533+0000 7f9b030a2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'iostat'
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:45:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'k8sevents'
Dec  8 04:45:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:03.670+0000 7f9b030a2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:45:04 np0005550137 podman[74892]: 2025-12-08 09:45:04.054185562 +0000 UTC m=+0.049969625 container create fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991 (image=quay.io/ceph/ceph:v19, name=eloquent_newton, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  8 04:45:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'localpool'
Dec  8 04:45:04 np0005550137 systemd[1]: Started libpod-conmon-fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991.scope.
Dec  8 04:45:04 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:04 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef709ffd07db359d8de5cb031ced1cce0f9423a37c55e87da69ff083bda27cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:04 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef709ffd07db359d8de5cb031ced1cce0f9423a37c55e87da69ff083bda27cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:04 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cef709ffd07db359d8de5cb031ced1cce0f9423a37c55e87da69ff083bda27cf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:04 np0005550137 podman[74892]: 2025-12-08 09:45:04.037187207 +0000 UTC m=+0.032971270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:04 np0005550137 podman[74892]: 2025-12-08 09:45:04.139900922 +0000 UTC m=+0.135685035 container init fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991 (image=quay.io/ceph/ceph:v19, name=eloquent_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Dec  8 04:45:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mds_autoscaler'
Dec  8 04:45:04 np0005550137 podman[74892]: 2025-12-08 09:45:04.149641049 +0000 UTC m=+0.145425112 container start fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991 (image=quay.io/ceph/ceph:v19, name=eloquent_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  8 04:45:04 np0005550137 podman[74892]: 2025-12-08 09:45:04.153088827 +0000 UTC m=+0.148872950 container attach fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991 (image=quay.io/ceph/ceph:v19, name=eloquent_newton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  8 04:45:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  8 04:45:04 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2930241048' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]: 
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]: {
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "health": {
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "status": "HEALTH_OK",
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "checks": {},
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "mutes": []
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    },
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "election_epoch": 5,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "quorum": [
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        0
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    ],
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "quorum_names": [
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "compute-0"
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    ],
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "quorum_age": 4,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "monmap": {
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "epoch": 1,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "min_mon_release_name": "squid",
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_mons": 1
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    },
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "osdmap": {
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "epoch": 1,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_osds": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_up_osds": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "osd_up_since": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_in_osds": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "osd_in_since": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_remapped_pgs": 0
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    },
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "pgmap": {
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "pgs_by_state": [],
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_pgs": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_pools": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_objects": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "data_bytes": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "bytes_used": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "bytes_avail": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "bytes_total": 0
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    },
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "fsmap": {
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "epoch": 1,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "btime": "2025-12-08T09:44:57:301434+0000",
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "by_rank": [],
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "up:standby": 0
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    },
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "mgrmap": {
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "available": false,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "num_standbys": 0,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "modules": [
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:            "iostat",
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:            "nfs",
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:            "restful"
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        ],
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "services": {}
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    },
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "servicemap": {
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "epoch": 1,
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "modified": "2025-12-08T09:44:57.306979+0000",
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:        "services": {}
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    },
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]:    "progress_events": {}
Dec  8 04:45:04 np0005550137 eloquent_newton[74909]: }
Dec  8 04:45:04 np0005550137 systemd[1]: libpod-fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991.scope: Deactivated successfully.
Dec  8 04:45:04 np0005550137 podman[74892]: 2025-12-08 09:45:04.347026077 +0000 UTC m=+0.342810150 container died fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991 (image=quay.io/ceph/ceph:v19, name=eloquent_newton, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:04 np0005550137 systemd[1]: var-lib-containers-storage-overlay-cef709ffd07db359d8de5cb031ced1cce0f9423a37c55e87da69ff083bda27cf-merged.mount: Deactivated successfully.
Dec  8 04:45:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mirroring'
Dec  8 04:45:04 np0005550137 podman[74892]: 2025-12-08 09:45:04.391786427 +0000 UTC m=+0.387570510 container remove fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991 (image=quay.io/ceph/ceph:v19, name=eloquent_newton, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Dec  8 04:45:04 np0005550137 systemd[1]: libpod-conmon-fb0956ddd028bdd53c176052c694d301fe2727f2bc023650fa3dbb82ef74b991.scope: Deactivated successfully.
Dec  8 04:45:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'nfs'
Dec  8 04:45:04 np0005550137 ceph-mgr[74806]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:45:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'orchestrator'
Dec  8 04:45:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:04.697+0000 7f9b030a2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:45:04 np0005550137 ceph-mgr[74806]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:45:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_perf_query'
Dec  8 04:45:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:04.930+0000 7f9b030a2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_support'
Dec  8 04:45:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:05.006+0000 7f9b030a2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'pg_autoscaler'
Dec  8 04:45:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:05.087+0000 7f9b030a2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'progress'
Dec  8 04:45:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:05.184+0000 7f9b030a2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'prometheus'
Dec  8 04:45:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:05.265+0000 7f9b030a2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rbd_support'
Dec  8 04:45:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:05.643+0000 7f9b030a2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'restful'
Dec  8 04:45:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:05.749+0000 7f9b030a2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:45:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rgw'
Dec  8 04:45:06 np0005550137 ceph-mgr[74806]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:45:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rook'
Dec  8 04:45:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:06.189+0000 7f9b030a2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:45:06 np0005550137 podman[74947]: 2025-12-08 09:45:06.483118586 +0000 UTC m=+0.064996729 container create 9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  8 04:45:06 np0005550137 systemd[1]: Started libpod-conmon-9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd.scope.
Dec  8 04:45:06 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:06 np0005550137 podman[74947]: 2025-12-08 09:45:06.450346144 +0000 UTC m=+0.032224347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:06 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f989aa711b30bb9714e42388eb7d2eac16b9cf524932a0ae14210cfe1d4029ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:06 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f989aa711b30bb9714e42388eb7d2eac16b9cf524932a0ae14210cfe1d4029ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:06 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f989aa711b30bb9714e42388eb7d2eac16b9cf524932a0ae14210cfe1d4029ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:06 np0005550137 podman[74947]: 2025-12-08 09:45:06.561247287 +0000 UTC m=+0.143125450 container init 9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  8 04:45:06 np0005550137 podman[74947]: 2025-12-08 09:45:06.568485816 +0000 UTC m=+0.150363939 container start 9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  8 04:45:06 np0005550137 podman[74947]: 2025-12-08 09:45:06.573173583 +0000 UTC m=+0.155051746 container attach 9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  8 04:45:06 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  8 04:45:06 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550793316' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]: 
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]: {
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "health": {
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "status": "HEALTH_OK",
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "checks": {},
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "mutes": []
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    },
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "election_epoch": 5,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "quorum": [
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        0
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    ],
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "quorum_names": [
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "compute-0"
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    ],
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "quorum_age": 6,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "monmap": {
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "epoch": 1,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "min_mon_release_name": "squid",
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_mons": 1
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    },
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "osdmap": {
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "epoch": 1,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_osds": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_up_osds": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "osd_up_since": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_in_osds": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "osd_in_since": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_remapped_pgs": 0
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    },
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "pgmap": {
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "pgs_by_state": [],
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_pgs": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_pools": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_objects": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "data_bytes": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "bytes_used": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "bytes_avail": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "bytes_total": 0
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    },
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "fsmap": {
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "epoch": 1,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "btime": "2025-12-08T09:44:57:301434+0000",
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "by_rank": [],
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "up:standby": 0
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    },
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "mgrmap": {
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "available": false,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "num_standbys": 0,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "modules": [
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:            "iostat",
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:            "nfs",
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:            "restful"
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        ],
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "services": {}
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    },
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "servicemap": {
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "epoch": 1,
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "modified": "2025-12-08T09:44:57.306979+0000",
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:        "services": {}
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    },
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]:    "progress_events": {}
Dec  8 04:45:06 np0005550137 jovial_lumiere[74963]: }
Dec  8 04:45:06 np0005550137 systemd[1]: libpod-9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd.scope: Deactivated successfully.
Dec  8 04:45:06 np0005550137 ceph-mgr[74806]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:45:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'selftest'
Dec  8 04:45:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:06.794+0000 7f9b030a2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:45:06 np0005550137 podman[74989]: 2025-12-08 09:45:06.838294164 +0000 UTC m=+0.032196235 container died 9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  8 04:45:06 np0005550137 systemd[1]: var-lib-containers-storage-overlay-f989aa711b30bb9714e42388eb7d2eac16b9cf524932a0ae14210cfe1d4029ec-merged.mount: Deactivated successfully.
Dec  8 04:45:06 np0005550137 ceph-mgr[74806]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:45:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'snap_schedule'
Dec  8 04:45:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:06.869+0000 7f9b030a2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:45:06 np0005550137 podman[74989]: 2025-12-08 09:45:06.882417095 +0000 UTC m=+0.076319156 container remove 9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:06 np0005550137 systemd[1]: libpod-conmon-9c81cb5055bcb341afb008838681804b34d6ef86076cc4b107fa11eed2c70abd.scope: Deactivated successfully.
Dec  8 04:45:06 np0005550137 ceph-mgr[74806]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:45:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'stats'
Dec  8 04:45:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:06.949+0000 7f9b030a2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'status'
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telegraf'
Dec  8 04:45:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:07.100+0000 7f9b030a2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telemetry'
Dec  8 04:45:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:07.175+0000 7f9b030a2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'test_orchestrator'
Dec  8 04:45:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:07.333+0000 7f9b030a2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'volumes'
Dec  8 04:45:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:07.566+0000 7f9b030a2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'zabbix'
Dec  8 04:45:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:07.838+0000 7f9b030a2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:07.912+0000 7f9b030a2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: ms_deliver_dispatch: unhandled message 0x558b8678c9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kitiwu
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map Activating!
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.kitiwu(active, starting, since 0.0132595s)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map I am now activating
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e1 all = 1
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"} v 0)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: balancer
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [balancer INFO root] Starting
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.kitiwu is now available
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: crash
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [balancer INFO root] Optimize plan auto_2025-12-08_09:45:07
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [balancer INFO root] do_upmap
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [balancer INFO root] No pools available
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: devicehealth
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Starting
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: iostat
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: nfs
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: orchestrator
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: pg_autoscaler
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: progress
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [progress INFO root] Loading...
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [progress INFO root] No stored events to load
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded [] historic events
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded OSDMap, ready.
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] _maybe_adjust
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] recovery thread starting
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] starting setup
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: rbd_support
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: restful
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [restful INFO root] server_addr: :: server_port: 8003
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"} v 0)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: status
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [restful WARNING root] server not running: no certificate configured
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: telemetry
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] PerfHandler: starting
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TaskHandler: starting
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"} v 0)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] setup complete
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: Activating manager daemon compute-0.kitiwu
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: Manager daemon compute-0.kitiwu is now available
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:07 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: volumes
Dec  8 04:45:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:08 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.kitiwu(active, since 1.02583s)
Dec  8 04:45:08 np0005550137 podman[75084]: 2025-12-08 09:45:08.969296913 +0000 UTC m=+0.049300223 container create 73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9 (image=quay.io/ceph/ceph:v19, name=agitated_shirley, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  8 04:45:08 np0005550137 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:08 np0005550137 ceph-mon[74516]: from='mgr.14102 192.168.122.100:0/2692427801' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:08 np0005550137 systemd[1]: Started libpod-conmon-73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9.scope.
Dec  8 04:45:09 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:09 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3661ae2550bc158995e8c71b98ed823af1334391ec5e495b47572dc2b086d869/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:09 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3661ae2550bc158995e8c71b98ed823af1334391ec5e495b47572dc2b086d869/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:09 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3661ae2550bc158995e8c71b98ed823af1334391ec5e495b47572dc2b086d869/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:09 np0005550137 podman[75084]: 2025-12-08 09:45:09.023940755 +0000 UTC m=+0.103944075 container init 73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9 (image=quay.io/ceph/ceph:v19, name=agitated_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:09 np0005550137 podman[75084]: 2025-12-08 09:45:09.03014263 +0000 UTC m=+0.110145930 container start 73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9 (image=quay.io/ceph/ceph:v19, name=agitated_shirley, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:09 np0005550137 podman[75084]: 2025-12-08 09:45:09.033081322 +0000 UTC m=+0.113084652 container attach 73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9 (image=quay.io/ceph/ceph:v19, name=agitated_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  8 04:45:09 np0005550137 podman[75084]: 2025-12-08 09:45:08.946405122 +0000 UTC m=+0.026408452 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec  8 04:45:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3772794215' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]: 
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]: {
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "health": {
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "status": "HEALTH_OK",
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "checks": {},
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "mutes": []
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    },
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "election_epoch": 5,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "quorum": [
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        0
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    ],
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "quorum_names": [
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "compute-0"
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    ],
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "quorum_age": 9,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "monmap": {
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "epoch": 1,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "min_mon_release_name": "squid",
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_mons": 1
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    },
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "osdmap": {
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "epoch": 1,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_osds": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_up_osds": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "osd_up_since": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_in_osds": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "osd_in_since": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_remapped_pgs": 0
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    },
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "pgmap": {
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "pgs_by_state": [],
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_pgs": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_pools": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_objects": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "data_bytes": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "bytes_used": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "bytes_avail": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "bytes_total": 0
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    },
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "fsmap": {
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "epoch": 1,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "btime": "2025-12-08T09:44:57:301434+0000",
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "by_rank": [],
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "up:standby": 0
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    },
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "mgrmap": {
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "available": true,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "num_standbys": 0,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "modules": [
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:            "iostat",
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:            "nfs",
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:            "restful"
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        ],
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "services": {}
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    },
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "servicemap": {
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "epoch": 1,
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "modified": "2025-12-08T09:44:57.306979+0000",
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:        "services": {}
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    },
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]:    "progress_events": {}
Dec  8 04:45:09 np0005550137 agitated_shirley[75100]: }
Dec  8 04:45:09 np0005550137 systemd[1]: libpod-73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9.scope: Deactivated successfully.
Dec  8 04:45:09 np0005550137 podman[75127]: 2025-12-08 09:45:09.495472228 +0000 UTC m=+0.023355036 container died 73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9 (image=quay.io/ceph/ceph:v19, name=agitated_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  8 04:45:09 np0005550137 systemd[1]: var-lib-containers-storage-overlay-3661ae2550bc158995e8c71b98ed823af1334391ec5e495b47572dc2b086d869-merged.mount: Deactivated successfully.
Dec  8 04:45:09 np0005550137 podman[75127]: 2025-12-08 09:45:09.538135373 +0000 UTC m=+0.066018111 container remove 73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9 (image=quay.io/ceph/ceph:v19, name=agitated_shirley, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  8 04:45:09 np0005550137 systemd[1]: libpod-conmon-73baf202f781584e0075afe9ea48f1f0df83480705f5b9e0938073a527b872a9.scope: Deactivated successfully.
Dec  8 04:45:09 np0005550137 podman[75140]: 2025-12-08 09:45:09.622069997 +0000 UTC m=+0.051851745 container create aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb (image=quay.io/ceph/ceph:v19, name=laughing_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  8 04:45:09 np0005550137 systemd[1]: Started libpod-conmon-aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb.scope.
Dec  8 04:45:09 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:09 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bec93223a7c110a00cc54a965b564ac024d8acdb77beea6d8c80b12663a294f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:09 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bec93223a7c110a00cc54a965b564ac024d8acdb77beea6d8c80b12663a294f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:09 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bec93223a7c110a00cc54a965b564ac024d8acdb77beea6d8c80b12663a294f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:09 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bec93223a7c110a00cc54a965b564ac024d8acdb77beea6d8c80b12663a294f/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:09 np0005550137 podman[75140]: 2025-12-08 09:45:09.599106773 +0000 UTC m=+0.028888591 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:09 np0005550137 podman[75140]: 2025-12-08 09:45:09.695570222 +0000 UTC m=+0.125352000 container init aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb (image=quay.io/ceph/ceph:v19, name=laughing_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:09 np0005550137 podman[75140]: 2025-12-08 09:45:09.701027884 +0000 UTC m=+0.130809602 container start aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb (image=quay.io/ceph/ceph:v19, name=laughing_newton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Dec  8 04:45:09 np0005550137 podman[75140]: 2025-12-08 09:45:09.704683438 +0000 UTC m=+0.134465207 container attach aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb (image=quay.io/ceph/ceph:v19, name=laughing_newton, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:09 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.kitiwu(active, since 2s)
Dec  8 04:45:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  8 04:45:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3051030368' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  8 04:45:10 np0005550137 laughing_newton[75156]: 
Dec  8 04:45:10 np0005550137 laughing_newton[75156]: [global]
Dec  8 04:45:10 np0005550137 laughing_newton[75156]: 	fsid = ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:45:10 np0005550137 laughing_newton[75156]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec  8 04:45:10 np0005550137 systemd[1]: libpod-aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb.scope: Deactivated successfully.
Dec  8 04:45:10 np0005550137 podman[75140]: 2025-12-08 09:45:10.078109472 +0000 UTC m=+0.507891190 container died aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb (image=quay.io/ceph/ceph:v19, name=laughing_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:10 np0005550137 systemd[1]: var-lib-containers-storage-overlay-7bec93223a7c110a00cc54a965b564ac024d8acdb77beea6d8c80b12663a294f-merged.mount: Deactivated successfully.
Dec  8 04:45:10 np0005550137 podman[75140]: 2025-12-08 09:45:10.121087257 +0000 UTC m=+0.550869015 container remove aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb (image=quay.io/ceph/ceph:v19, name=laughing_newton, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:10 np0005550137 systemd[1]: libpod-conmon-aef22abc87a2c618341e6d9466ece31798b444d3148fea7e5e46a608f411e0cb.scope: Deactivated successfully.
Dec  8 04:45:10 np0005550137 podman[75194]: 2025-12-08 09:45:10.19991268 +0000 UTC m=+0.048452148 container create e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08 (image=quay.io/ceph/ceph:v19, name=nifty_zhukovsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:10 np0005550137 systemd[1]: Started libpod-conmon-e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08.scope.
Dec  8 04:45:10 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961bbd2d40a2babd283f8370db0815d878d95c2c65e385df2f623229e764122d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961bbd2d40a2babd283f8370db0815d878d95c2c65e385df2f623229e764122d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/961bbd2d40a2babd283f8370db0815d878d95c2c65e385df2f623229e764122d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:10 np0005550137 podman[75194]: 2025-12-08 09:45:10.258927948 +0000 UTC m=+0.107467476 container init e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08 (image=quay.io/ceph/ceph:v19, name=nifty_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:10 np0005550137 podman[75194]: 2025-12-08 09:45:10.263872024 +0000 UTC m=+0.112411492 container start e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08 (image=quay.io/ceph/ceph:v19, name=nifty_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec  8 04:45:10 np0005550137 podman[75194]: 2025-12-08 09:45:10.182691297 +0000 UTC m=+0.031230795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:10 np0005550137 podman[75194]: 2025-12-08 09:45:10.28372683 +0000 UTC m=+0.132266328 container attach e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08 (image=quay.io/ceph/ceph:v19, name=nifty_zhukovsky, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec  8 04:45:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/108743182' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  8 04:45:11 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3051030368' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  8 04:45:11 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/108743182' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Dec  8 04:45:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/108743182' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  1: '-n'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  2: 'mgr.compute-0.kitiwu'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  3: '-f'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  4: '--setuser'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  5: 'ceph'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  6: '--setgroup'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  7: 'ceph'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  8: '--default-log-to-file=false'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  9: '--default-log-to-journald=true'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr respawn  exe_path /proc/self/exe
Dec  8 04:45:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.kitiwu(active, since 3s)
Dec  8 04:45:11 np0005550137 systemd[1]: libpod-e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08.scope: Deactivated successfully.
Dec  8 04:45:11 np0005550137 podman[75194]: 2025-12-08 09:45:11.039106875 +0000 UTC m=+0.887646353 container died e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08 (image=quay.io/ceph/ceph:v19, name=nifty_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:11 np0005550137 systemd[1]: var-lib-containers-storage-overlay-961bbd2d40a2babd283f8370db0815d878d95c2c65e385df2f623229e764122d-merged.mount: Deactivated successfully.
Dec  8 04:45:11 np0005550137 podman[75194]: 2025-12-08 09:45:11.077794674 +0000 UTC m=+0.926334152 container remove e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08 (image=quay.io/ceph/ceph:v19, name=nifty_zhukovsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Dec  8 04:45:11 np0005550137 systemd[1]: libpod-conmon-e2f52702b160d3aa01df340359a37bdc503d95842700e51a5265dd3476229e08.scope: Deactivated successfully.
Dec  8 04:45:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setuser ceph since I am not root
Dec  8 04:45:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setgroup ceph since I am not root
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: pidfile_write: ignore empty --pid-file
Dec  8 04:45:11 np0005550137 podman[75249]: 2025-12-08 09:45:11.141783399 +0000 UTC m=+0.044212623 container create 97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8 (image=quay.io/ceph/ceph:v19, name=suspicious_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'alerts'
Dec  8 04:45:11 np0005550137 systemd[1]: Started libpod-conmon-97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8.scope.
Dec  8 04:45:11 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:11 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45e58fbf3fe37c06a7d78a4b64a5aaa1aed81bf30d31044f2ee4da4ee43fb4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:11 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45e58fbf3fe37c06a7d78a4b64a5aaa1aed81bf30d31044f2ee4da4ee43fb4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:11 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45e58fbf3fe37c06a7d78a4b64a5aaa1aed81bf30d31044f2ee4da4ee43fb4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:11 np0005550137 podman[75249]: 2025-12-08 09:45:11.120179859 +0000 UTC m=+0.022609093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:11 np0005550137 podman[75249]: 2025-12-08 09:45:11.2265505 +0000 UTC m=+0.128979724 container init 97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8 (image=quay.io/ceph/ceph:v19, name=suspicious_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  8 04:45:11 np0005550137 podman[75249]: 2025-12-08 09:45:11.235868374 +0000 UTC m=+0.138297588 container start 97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8 (image=quay.io/ceph/ceph:v19, name=suspicious_thompson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  8 04:45:11 np0005550137 podman[75249]: 2025-12-08 09:45:11.239760546 +0000 UTC m=+0.142189790 container attach 97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8 (image=quay.io/ceph/ceph:v19, name=suspicious_thompson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'balancer'
Dec  8 04:45:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:11.243+0000 7fa1ca95c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:45:11 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'cephadm'
Dec  8 04:45:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:11.323+0000 7fa1ca95c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:45:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec  8 04:45:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/53754266' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Dec  8 04:45:11 np0005550137 suspicious_thompson[75285]: {
Dec  8 04:45:11 np0005550137 suspicious_thompson[75285]:    "epoch": 5,
Dec  8 04:45:11 np0005550137 suspicious_thompson[75285]:    "available": true,
Dec  8 04:45:11 np0005550137 suspicious_thompson[75285]:    "active_name": "compute-0.kitiwu",
Dec  8 04:45:11 np0005550137 suspicious_thompson[75285]:    "num_standby": 0
Dec  8 04:45:11 np0005550137 suspicious_thompson[75285]: }
Dec  8 04:45:11 np0005550137 systemd[1]: libpod-97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8.scope: Deactivated successfully.
Dec  8 04:45:11 np0005550137 podman[75249]: 2025-12-08 09:45:11.672309212 +0000 UTC m=+0.574738426 container died 97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8 (image=quay.io/ceph/ceph:v19, name=suspicious_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  8 04:45:11 np0005550137 systemd[1]: var-lib-containers-storage-overlay-b45e58fbf3fe37c06a7d78a4b64a5aaa1aed81bf30d31044f2ee4da4ee43fb4b-merged.mount: Deactivated successfully.
Dec  8 04:45:11 np0005550137 podman[75249]: 2025-12-08 09:45:11.718223948 +0000 UTC m=+0.620653192 container remove 97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8 (image=quay.io/ceph/ceph:v19, name=suspicious_thompson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:11 np0005550137 systemd[1]: libpod-conmon-97ce393d9d598bd04394ca9cd2e5e262808bf10ed9e64168a6c74649e4699cc8.scope: Deactivated successfully.
Dec  8 04:45:11 np0005550137 podman[75331]: 2025-12-08 09:45:11.786397966 +0000 UTC m=+0.043544234 container create bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93 (image=quay.io/ceph/ceph:v19, name=angry_noyce, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:11 np0005550137 systemd[1]: Started libpod-conmon-bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93.scope.
Dec  8 04:45:11 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:11 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102dbfafb217f5e11dcf4f53193abf46b272f2382dcf843af20f80cdecf38ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:11 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102dbfafb217f5e11dcf4f53193abf46b272f2382dcf843af20f80cdecf38ff/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:11 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1102dbfafb217f5e11dcf4f53193abf46b272f2382dcf843af20f80cdecf38ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:11 np0005550137 podman[75331]: 2025-12-08 09:45:11.866899441 +0000 UTC m=+0.124045749 container init bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93 (image=quay.io/ceph/ceph:v19, name=angry_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:11 np0005550137 podman[75331]: 2025-12-08 09:45:11.768793041 +0000 UTC m=+0.025939319 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:11 np0005550137 podman[75331]: 2025-12-08 09:45:11.874307954 +0000 UTC m=+0.131454212 container start bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93 (image=quay.io/ceph/ceph:v19, name=angry_noyce, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  8 04:45:11 np0005550137 podman[75331]: 2025-12-08 09:45:11.8776762 +0000 UTC m=+0.134822458 container attach bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93 (image=quay.io/ceph/ceph:v19, name=angry_noyce, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  8 04:45:12 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/108743182' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec  8 04:45:12 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'crash'
Dec  8 04:45:12 np0005550137 ceph-mgr[74806]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:45:12 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'dashboard'
Dec  8 04:45:12 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:12.230+0000 7fa1ca95c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:45:12 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'devicehealth'
Dec  8 04:45:12 np0005550137 ceph-mgr[74806]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:45:12 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'diskprediction_local'
Dec  8 04:45:12 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:12.893+0000 7fa1ca95c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:45:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  8 04:45:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  8 04:45:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  from numpy import show_config as show_numpy_config
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'influx'
Dec  8 04:45:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:13.047+0000 7fa1ca95c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'insights'
Dec  8 04:45:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:13.117+0000 7fa1ca95c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'iostat'
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'k8sevents'
Dec  8 04:45:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:13.283+0000 7fa1ca95c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'localpool'
Dec  8 04:45:13 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mds_autoscaler'
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mirroring'
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'nfs'
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'orchestrator'
Dec  8 04:45:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:14.387+0000 7fa1ca95c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:14.617+0000 7fa1ca95c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_perf_query'
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_support'
Dec  8 04:45:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:14.705+0000 7fa1ca95c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'pg_autoscaler'
Dec  8 04:45:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:14.786+0000 7fa1ca95c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'progress'
Dec  8 04:45:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:14.881+0000 7fa1ca95c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:45:14 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'prometheus'
Dec  8 04:45:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:14.969+0000 7fa1ca95c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:45:15 np0005550137 ceph-mgr[74806]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:45:15 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:15.342+0000 7fa1ca95c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:45:15 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rbd_support'
Dec  8 04:45:15 np0005550137 ceph-mgr[74806]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:45:15 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'restful'
Dec  8 04:45:15 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:15.441+0000 7fa1ca95c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:45:15 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rgw'
Dec  8 04:45:15 np0005550137 ceph-mgr[74806]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:45:15 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rook'
Dec  8 04:45:15 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:15.922+0000 7fa1ca95c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'selftest'
Dec  8 04:45:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:16.501+0000 7fa1ca95c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'snap_schedule'
Dec  8 04:45:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:16.577+0000 7fa1ca95c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'stats'
Dec  8 04:45:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:16.660+0000 7fa1ca95c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'status'
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telegraf'
Dec  8 04:45:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:16.812+0000 7fa1ca95c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:45:16 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telemetry'
Dec  8 04:45:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:16.882+0000 7fa1ca95c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'test_orchestrator'
Dec  8 04:45:17 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:17.048+0000 7fa1ca95c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'volumes'
Dec  8 04:45:17 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:17.273+0000 7fa1ca95c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'zabbix'
Dec  8 04:45:17 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:17.547+0000 7fa1ca95c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:45:17.618+0000 7fa1ca95c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.kitiwu restarted
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kitiwu
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: ms_deliver_dispatch: unhandled message 0x55fa43090d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.kitiwu(active, starting, since 0.010349s)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map Activating!
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map I am now activating
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e1 all = 1
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.kitiwu is now available
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: balancer
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] Starting
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] Optimize plan auto_2025-12-08_09:45:17
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] do_upmap
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] No pools available
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: Active manager daemon compute-0.kitiwu restarted
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: Activating manager daemon compute-0.kitiwu
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: Manager daemon compute-0.kitiwu is now available
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: cephadm
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: crash
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: devicehealth
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Starting
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: iostat
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: nfs
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: orchestrator
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: pg_autoscaler
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] _maybe_adjust
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: progress
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [progress INFO root] Loading...
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [progress INFO root] No stored events to load
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded [] historic events
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded OSDMap, ready.
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] recovery thread starting
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] starting setup
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: rbd_support
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: restful
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [restful INFO root] server_addr: :: server_port: 8003
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [restful WARNING root] server not running: no certificate configured
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: status
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: telemetry
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] PerfHandler: starting
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TaskHandler: starting
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"} v 0)
Dec  8 04:45:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] setup complete
Dec  8 04:45:17 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: volumes
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:18 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.kitiwu(active, since 1.02237s)
Dec  8 04:45:18 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec  8 04:45:18 np0005550137 angry_noyce[75350]: {
Dec  8 04:45:18 np0005550137 angry_noyce[75350]:    "mgrmap_epoch": 7,
Dec  8 04:45:18 np0005550137 angry_noyce[75350]:    "initialized": true
Dec  8 04:45:18 np0005550137 angry_noyce[75350]: }
Dec  8 04:45:18 np0005550137 systemd[1]: libpod-bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93.scope: Deactivated successfully.
Dec  8 04:45:18 np0005550137 podman[75331]: 2025-12-08 09:45:18.674565251 +0000 UTC m=+6.931711509 container died bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93 (image=quay.io/ceph/ceph:v19, name=angry_noyce, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: Found migration_current of "None". Setting to last migration.
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:18 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:18 np0005550137 systemd[1]: var-lib-containers-storage-overlay-1102dbfafb217f5e11dcf4f53193abf46b272f2382dcf843af20f80cdecf38ff-merged.mount: Deactivated successfully.
Dec  8 04:45:18 np0005550137 podman[75331]: 2025-12-08 09:45:18.719370233 +0000 UTC m=+6.976516491 container remove bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93 (image=quay.io/ceph/ceph:v19, name=angry_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  8 04:45:18 np0005550137 systemd[1]: libpod-conmon-bb7fbcbb373b9097a5348b091e9b0ae2d532921c8bec1fd5a2e3a198db802b93.scope: Deactivated successfully.
Dec  8 04:45:18 np0005550137 podman[75499]: 2025-12-08 09:45:18.784817505 +0000 UTC m=+0.042827721 container create cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662 (image=quay.io/ceph/ceph:v19, name=hopeful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:18 np0005550137 systemd[1]: Started libpod-conmon-cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662.scope.
Dec  8 04:45:18 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:18 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8c38125acc00e70fbeec9d759263bdddf950024bcdde90915de2f5442dec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:18 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8c38125acc00e70fbeec9d759263bdddf950024bcdde90915de2f5442dec2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:18 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8c38125acc00e70fbeec9d759263bdddf950024bcdde90915de2f5442dec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:18 np0005550137 podman[75499]: 2025-12-08 09:45:18.763887736 +0000 UTC m=+0.021897972 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:18 np0005550137 podman[75499]: 2025-12-08 09:45:18.871487105 +0000 UTC m=+0.129497331 container init cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662 (image=quay.io/ceph/ceph:v19, name=hopeful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  8 04:45:18 np0005550137 podman[75499]: 2025-12-08 09:45:18.877753462 +0000 UTC m=+0.135763678 container start cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662 (image=quay.io/ceph/ceph:v19, name=hopeful_cerf, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:18 np0005550137 podman[75499]: 2025-12-08 09:45:18.881302024 +0000 UTC m=+0.139312240 container attach cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662 (image=quay.io/ceph/ceph:v19, name=hopeful_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  8 04:45:19 np0005550137 systemd[1]: libpod-cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662.scope: Deactivated successfully.
Dec  8 04:45:19 np0005550137 podman[75499]: 2025-12-08 09:45:19.254096317 +0000 UTC m=+0.512106533 container died cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662 (image=quay.io/ceph/ceph:v19, name=hopeful_cerf, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:45:19] ENGINE Bus STARTING
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:45:19] ENGINE Bus STARTING
Dec  8 04:45:19 np0005550137 systemd[1]: var-lib-containers-storage-overlay-0bd8c38125acc00e70fbeec9d759263bdddf950024bcdde90915de2f5442dec2-merged.mount: Deactivated successfully.
Dec  8 04:45:19 np0005550137 podman[75499]: 2025-12-08 09:45:19.292914471 +0000 UTC m=+0.550924697 container remove cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662 (image=quay.io/ceph/ceph:v19, name=hopeful_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:19 np0005550137 systemd[1]: libpod-conmon-cf8572af7d19cbadac00f118069ad27a2d72d7d4cb0447d7decd4d8d7f268662.scope: Deactivated successfully.
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:45:19] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:45:19] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:45:19 np0005550137 podman[75565]: 2025-12-08 09:45:19.392751355 +0000 UTC m=+0.076438289 container create f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d (image=quay.io/ceph/ceph:v19, name=stoic_maxwell, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:19 np0005550137 systemd[1]: Started libpod-conmon-f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d.scope.
Dec  8 04:45:19 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8990a73b6cbd7fdae293271dc6a54d42533e7194d4043e20b141217766aef59/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8990a73b6cbd7fdae293271dc6a54d42533e7194d4043e20b141217766aef59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8990a73b6cbd7fdae293271dc6a54d42533e7194d4043e20b141217766aef59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:19 np0005550137 podman[75565]: 2025-12-08 09:45:19.452331402 +0000 UTC m=+0.136018336 container init f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d (image=quay.io/ceph/ceph:v19, name=stoic_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:19 np0005550137 podman[75565]: 2025-12-08 09:45:19.456960748 +0000 UTC m=+0.140647682 container start f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d (image=quay.io/ceph/ceph:v19, name=stoic_maxwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:19 np0005550137 podman[75565]: 2025-12-08 09:45:19.460071356 +0000 UTC m=+0.143758370 container attach f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d (image=quay.io/ceph/ceph:v19, name=stoic_maxwell, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  8 04:45:19 np0005550137 podman[75565]: 2025-12-08 09:45:19.371798354 +0000 UTC m=+0.055485328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:45:19] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:45:19] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:45:19] ENGINE Bus STARTED
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:45:19] ENGINE Bus STARTED
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:45:19] ENGINE Client ('192.168.122.100', 47040) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:45:19] ENGINE Client ('192.168.122.100', 47040) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019918341 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Set ssh ssh_user
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec  8 04:45:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Set ssh ssh_config
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec  8 04:45:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec  8 04:45:19 np0005550137 stoic_maxwell[75593]: ssh user set to ceph-admin. sudo will be used
Dec  8 04:45:19 np0005550137 systemd[1]: libpod-f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d.scope: Deactivated successfully.
Dec  8 04:45:19 np0005550137 podman[75619]: 2025-12-08 09:45:19.889597926 +0000 UTC m=+0.024248075 container died f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d (image=quay.io/ceph/ceph:v19, name=stoic_maxwell, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:19 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e8990a73b6cbd7fdae293271dc6a54d42533e7194d4043e20b141217766aef59-merged.mount: Deactivated successfully.
Dec  8 04:45:19 np0005550137 podman[75619]: 2025-12-08 09:45:19.9284212 +0000 UTC m=+0.063071289 container remove f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d (image=quay.io/ceph/ceph:v19, name=stoic_maxwell, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Dec  8 04:45:19 np0005550137 systemd[1]: libpod-conmon-f93a2beee609c5f9f7f62008c88c8b9d9b3a63838d0c3fdc4ea8b04b50cd430d.scope: Deactivated successfully.
Dec  8 04:45:19 np0005550137 podman[75634]: 2025-12-08 09:45:19.994647816 +0000 UTC m=+0.041330423 container create f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b (image=quay.io/ceph/ceph:v19, name=quirky_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  8 04:45:20 np0005550137 systemd[1]: Started libpod-conmon-f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b.scope.
Dec  8 04:45:20 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e798f0adcd6857ddf895ecf13402abb7b37e1299457e0702f3bb2d08486192f/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e798f0adcd6857ddf895ecf13402abb7b37e1299457e0702f3bb2d08486192f/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e798f0adcd6857ddf895ecf13402abb7b37e1299457e0702f3bb2d08486192f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e798f0adcd6857ddf895ecf13402abb7b37e1299457e0702f3bb2d08486192f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e798f0adcd6857ddf895ecf13402abb7b37e1299457e0702f3bb2d08486192f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 podman[75634]: 2025-12-08 09:45:20.066141737 +0000 UTC m=+0.112824364 container init f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b (image=quay.io/ceph/ceph:v19, name=quirky_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:20 np0005550137 podman[75634]: 2025-12-08 09:45:19.975364828 +0000 UTC m=+0.022047475 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:20 np0005550137 podman[75634]: 2025-12-08 09:45:20.075701759 +0000 UTC m=+0.122384386 container start f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b (image=quay.io/ceph/ceph:v19, name=quirky_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:20 np0005550137 podman[75634]: 2025-12-08 09:45:20.079469228 +0000 UTC m=+0.126151855 container attach f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b (image=quay.io/ceph/ceph:v19, name=quirky_noether, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.kitiwu(active, since 2s)
Dec  8 04:45:20 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:20 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Set ssh ssh_identity_key
Dec  8 04:45:20 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec  8 04:45:20 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Set ssh private key
Dec  8 04:45:20 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Set ssh private key
Dec  8 04:45:20 np0005550137 systemd[1]: libpod-f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b.scope: Deactivated successfully.
Dec  8 04:45:20 np0005550137 podman[75634]: 2025-12-08 09:45:20.496935548 +0000 UTC m=+0.543618155 container died f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b (image=quay.io/ceph/ceph:v19, name=quirky_noether, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  8 04:45:20 np0005550137 systemd[1]: var-lib-containers-storage-overlay-3e798f0adcd6857ddf895ecf13402abb7b37e1299457e0702f3bb2d08486192f-merged.mount: Deactivated successfully.
Dec  8 04:45:20 np0005550137 podman[75634]: 2025-12-08 09:45:20.53605821 +0000 UTC m=+0.582740817 container remove f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b (image=quay.io/ceph/ceph:v19, name=quirky_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  8 04:45:20 np0005550137 systemd[1]: libpod-conmon-f08cd8bf8f03b7b98e97e16383880cb403329a6674eb50ae168335faca16cb1b.scope: Deactivated successfully.
Dec  8 04:45:20 np0005550137 podman[75689]: 2025-12-08 09:45:20.597223668 +0000 UTC m=+0.040486397 container create 6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f (image=quay.io/ceph/ceph:v19, name=zen_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:20 np0005550137 systemd[1]: Started libpod-conmon-6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f.scope.
Dec  8 04:45:20 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97ab9b98280fb78b898769267e7b5f65d704098974ef2903934a33117480f/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97ab9b98280fb78b898769267e7b5f65d704098974ef2903934a33117480f/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97ab9b98280fb78b898769267e7b5f65d704098974ef2903934a33117480f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97ab9b98280fb78b898769267e7b5f65d704098974ef2903934a33117480f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37c97ab9b98280fb78b898769267e7b5f65d704098974ef2903934a33117480f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:20 np0005550137 podman[75689]: 2025-12-08 09:45:20.578301631 +0000 UTC m=+0.021564380 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:20 np0005550137 podman[75689]: 2025-12-08 09:45:20.679799039 +0000 UTC m=+0.123061788 container init 6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f (image=quay.io/ceph/ceph:v19, name=zen_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  8 04:45:20 np0005550137 podman[75689]: 2025-12-08 09:45:20.686028575 +0000 UTC m=+0.129291294 container start 6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f (image=quay.io/ceph/ceph:v19, name=zen_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:20 np0005550137 podman[75689]: 2025-12-08 09:45:20.689264827 +0000 UTC m=+0.132527556 container attach 6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f (image=quay.io/ceph/ceph:v19, name=zen_tu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:45:19] ENGINE Bus STARTING
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:45:19] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:45:19] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:45:19] ENGINE Bus STARTED
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:45:19] ENGINE Client ('192.168.122.100', 47040) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: Set ssh ssh_user
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: Set ssh ssh_config
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: ssh user set to ceph-admin. sudo will be used
Dec  8 04:45:20 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:21 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec  8 04:45:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:21 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec  8 04:45:21 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec  8 04:45:21 np0005550137 systemd[1]: libpod-6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f.scope: Deactivated successfully.
Dec  8 04:45:21 np0005550137 podman[75689]: 2025-12-08 09:45:21.066815246 +0000 UTC m=+0.510077995 container died 6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f (image=quay.io/ceph/ceph:v19, name=zen_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  8 04:45:21 np0005550137 systemd[1]: var-lib-containers-storage-overlay-37c97ab9b98280fb78b898769267e7b5f65d704098974ef2903934a33117480f-merged.mount: Deactivated successfully.
Dec  8 04:45:21 np0005550137 podman[75689]: 2025-12-08 09:45:21.110009651 +0000 UTC m=+0.553272420 container remove 6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f (image=quay.io/ceph/ceph:v19, name=zen_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:21 np0005550137 systemd[1]: libpod-conmon-6cfcf572d6da12bae1087898d8067e7ebf9050cce6ac3e51a63eda0417beec2f.scope: Deactivated successfully.
Dec  8 04:45:21 np0005550137 podman[75742]: 2025-12-08 09:45:21.173607662 +0000 UTC m=+0.041595636 container create ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8 (image=quay.io/ceph/ceph:v19, name=gracious_boyd, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:21 np0005550137 systemd[1]: Started libpod-conmon-ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8.scope.
Dec  8 04:45:21 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac615aeea04109f6ebfde8817973493b813025e6b2cbe0ba80040c508d9ebc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac615aeea04109f6ebfde8817973493b813025e6b2cbe0ba80040c508d9ebc1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aac615aeea04109f6ebfde8817973493b813025e6b2cbe0ba80040c508d9ebc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:21 np0005550137 podman[75742]: 2025-12-08 09:45:21.238124782 +0000 UTC m=+0.106112726 container init ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8 (image=quay.io/ceph/ceph:v19, name=gracious_boyd, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  8 04:45:21 np0005550137 podman[75742]: 2025-12-08 09:45:21.242806279 +0000 UTC m=+0.110794213 container start ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8 (image=quay.io/ceph/ceph:v19, name=gracious_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  8 04:45:21 np0005550137 podman[75742]: 2025-12-08 09:45:21.246231188 +0000 UTC m=+0.114219142 container attach ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8 (image=quay.io/ceph/ceph:v19, name=gracious_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  8 04:45:21 np0005550137 podman[75742]: 2025-12-08 09:45:21.153796556 +0000 UTC m=+0.021784510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:21 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:21 np0005550137 gracious_boyd[75760]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3UmoyzEUIqDL86xY+A4pIdseWngZf5mx1S4MS1+a1kB0bjk+bKZZzr9c+huYtdRdegR2qZ3FXkFOCsYfqjl6siwfO5l5yWob/eaMK52wzX1eCsc5CH6ekfM0ALMAgXryHQMvlMLTFXvTCJ5eF+0Ni91kkAwPK2JPQI2qsEAJhYtmWNsifVudnfJzvcTwqNESIL0uShnu2uTTm1S6ae87CvZ0MgqZfyJWNhgmuAZOfBxs/IHZGmpKIIdO+cZHPPgDwyR+gp8NXAHgmbai7s4uLdYZKSVhzjNcuzOS7iHsVU0hFl65OBFX6eCA7cfeZaOHN0dLvuZfnH6qmNqQmJegjSuORouS6QVRTBKXnYQEg+0CQw7SQEop/FK2GkaXn4TieQnOZTek40yLsA3Wbc7tBFfRlrY5GFOxPgoq6uDDjPaoEZxqAw6DsY9pqWNbv0BoIo2lJUPGnanrHq68kaS1UlIyhQ7/5k0ISb1tCie8FmVquD1TIzon9FB5mh4Gn67k= zuul@controller
Dec  8 04:45:21 np0005550137 systemd[1]: libpod-ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8.scope: Deactivated successfully.
Dec  8 04:45:21 np0005550137 podman[75742]: 2025-12-08 09:45:21.594092815 +0000 UTC m=+0.462080749 container died ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8 (image=quay.io/ceph/ceph:v19, name=gracious_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:21 np0005550137 systemd[1]: var-lib-containers-storage-overlay-aac615aeea04109f6ebfde8817973493b813025e6b2cbe0ba80040c508d9ebc1-merged.mount: Deactivated successfully.
Dec  8 04:45:21 np0005550137 podman[75742]: 2025-12-08 09:45:21.62844242 +0000 UTC m=+0.496430354 container remove ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8 (image=quay.io/ceph/ceph:v19, name=gracious_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  8 04:45:21 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:21 np0005550137 systemd[1]: libpod-conmon-ea81fa875c43ccf6bcb2671196ab06bbb111262c0ae688510ae204a28fbe7df8.scope: Deactivated successfully.
Dec  8 04:45:21 np0005550137 podman[75793]: 2025-12-08 09:45:21.693070273 +0000 UTC m=+0.041690368 container create c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b (image=quay.io/ceph/ceph:v19, name=happy_tharp, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  8 04:45:21 np0005550137 systemd[1]: Started libpod-conmon-c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b.scope.
Dec  8 04:45:21 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d6f76b62518dceece5d1532e923bd9088b697a6e951b6dc4f190e5ef4ec927/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d6f76b62518dceece5d1532e923bd9088b697a6e951b6dc4f190e5ef4ec927/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d6f76b62518dceece5d1532e923bd9088b697a6e951b6dc4f190e5ef4ec927/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:21 np0005550137 podman[75793]: 2025-12-08 09:45:21.765833544 +0000 UTC m=+0.114453669 container init c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b (image=quay.io/ceph/ceph:v19, name=happy_tharp, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  8 04:45:21 np0005550137 podman[75793]: 2025-12-08 09:45:21.676079006 +0000 UTC m=+0.024699141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:21 np0005550137 podman[75793]: 2025-12-08 09:45:21.771881545 +0000 UTC m=+0.120501900 container start c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b (image=quay.io/ceph/ceph:v19, name=happy_tharp, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  8 04:45:21 np0005550137 podman[75793]: 2025-12-08 09:45:21.782551482 +0000 UTC m=+0.131171577 container attach c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b (image=quay.io/ceph/ceph:v19, name=happy_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  8 04:45:22 np0005550137 ceph-mon[74516]: Set ssh ssh_identity_key
Dec  8 04:45:22 np0005550137 ceph-mon[74516]: Set ssh private key
Dec  8 04:45:22 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:22 np0005550137 ceph-mon[74516]: Set ssh ssh_identity_pub
Dec  8 04:45:22 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:22 np0005550137 systemd-logind[805]: New session 21 of user ceph-admin.
Dec  8 04:45:22 np0005550137 systemd[1]: Created slice User Slice of UID 42477.
Dec  8 04:45:22 np0005550137 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  8 04:45:22 np0005550137 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  8 04:45:22 np0005550137 systemd[1]: Starting User Manager for UID 42477...
Dec  8 04:45:22 np0005550137 systemd[75846]: Queued start job for default target Main User Target.
Dec  8 04:45:22 np0005550137 systemd-logind[805]: New session 23 of user ceph-admin.
Dec  8 04:45:22 np0005550137 systemd[75846]: Created slice User Application Slice.
Dec  8 04:45:22 np0005550137 systemd[75846]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  8 04:45:22 np0005550137 systemd[75846]: Started Daily Cleanup of User's Temporary Directories.
Dec  8 04:45:22 np0005550137 systemd[75846]: Reached target Paths.
Dec  8 04:45:22 np0005550137 systemd[75846]: Reached target Timers.
Dec  8 04:45:22 np0005550137 systemd[75846]: Starting D-Bus User Message Bus Socket...
Dec  8 04:45:22 np0005550137 systemd[75846]: Starting Create User's Volatile Files and Directories...
Dec  8 04:45:22 np0005550137 systemd[75846]: Finished Create User's Volatile Files and Directories.
Dec  8 04:45:22 np0005550137 systemd[75846]: Listening on D-Bus User Message Bus Socket.
Dec  8 04:45:22 np0005550137 systemd[75846]: Reached target Sockets.
Dec  8 04:45:22 np0005550137 systemd[75846]: Reached target Basic System.
Dec  8 04:45:22 np0005550137 systemd[75846]: Reached target Main User Target.
Dec  8 04:45:22 np0005550137 systemd[75846]: Startup finished in 135ms.
Dec  8 04:45:22 np0005550137 systemd[1]: Started User Manager for UID 42477.
Dec  8 04:45:22 np0005550137 systemd[1]: Started Session 21 of User ceph-admin.
Dec  8 04:45:22 np0005550137 systemd[1]: Started Session 23 of User ceph-admin.
Dec  8 04:45:22 np0005550137 systemd-logind[805]: New session 24 of user ceph-admin.
Dec  8 04:45:23 np0005550137 systemd[1]: Started Session 24 of User ceph-admin.
Dec  8 04:45:23 np0005550137 systemd-logind[805]: New session 25 of user ceph-admin.
Dec  8 04:45:23 np0005550137 systemd[1]: Started Session 25 of User ceph-admin.
Dec  8 04:45:23 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec  8 04:45:23 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec  8 04:45:23 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:23 np0005550137 systemd-logind[805]: New session 26 of user ceph-admin.
Dec  8 04:45:23 np0005550137 systemd[1]: Started Session 26 of User ceph-admin.
Dec  8 04:45:23 np0005550137 systemd-logind[805]: New session 27 of user ceph-admin.
Dec  8 04:45:24 np0005550137 systemd[1]: Started Session 27 of User ceph-admin.
Dec  8 04:45:24 np0005550137 systemd-logind[805]: New session 28 of user ceph-admin.
Dec  8 04:45:24 np0005550137 systemd[1]: Started Session 28 of User ceph-admin.
Dec  8 04:45:24 np0005550137 systemd-logind[805]: New session 29 of user ceph-admin.
Dec  8 04:45:24 np0005550137 systemd[1]: Started Session 29 of User ceph-admin.
Dec  8 04:45:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052959 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:45:25 np0005550137 systemd-logind[805]: New session 30 of user ceph-admin.
Dec  8 04:45:25 np0005550137 systemd[1]: Started Session 30 of User ceph-admin.
Dec  8 04:45:25 np0005550137 ceph-mon[74516]: Deploying cephadm binary to compute-0
Dec  8 04:45:25 np0005550137 systemd-logind[805]: New session 31 of user ceph-admin.
Dec  8 04:45:25 np0005550137 systemd[1]: Started Session 31 of User ceph-admin.
Dec  8 04:45:25 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:26 np0005550137 systemd-logind[805]: New session 32 of user ceph-admin.
Dec  8 04:45:26 np0005550137 systemd[1]: Started Session 32 of User ceph-admin.
Dec  8 04:45:26 np0005550137 systemd-logind[805]: New session 33 of user ceph-admin.
Dec  8 04:45:26 np0005550137 systemd[1]: Started Session 33 of User ceph-admin.
Dec  8 04:45:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  8 04:45:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:27 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Added host compute-0
Dec  8 04:45:27 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  8 04:45:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  8 04:45:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  8 04:45:27 np0005550137 happy_tharp[75816]: Added host 'compute-0' with addr '192.168.122.100'
Dec  8 04:45:27 np0005550137 systemd[1]: libpod-c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b.scope: Deactivated successfully.
Dec  8 04:45:27 np0005550137 podman[75793]: 2025-12-08 09:45:27.410735916 +0000 UTC m=+5.759356031 container died c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b (image=quay.io/ceph/ceph:v19, name=happy_tharp, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:27 np0005550137 systemd[1]: var-lib-containers-storage-overlay-34d6f76b62518dceece5d1532e923bd9088b697a6e951b6dc4f190e5ef4ec927-merged.mount: Deactivated successfully.
Dec  8 04:45:27 np0005550137 podman[75793]: 2025-12-08 09:45:27.462347197 +0000 UTC m=+5.810967292 container remove c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b (image=quay.io/ceph/ceph:v19, name=happy_tharp, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:27 np0005550137 systemd[1]: libpod-conmon-c536ebee4c22b836a95a7fe4d33a3b7da260e125bc8a00afba3cab02cef9d38b.scope: Deactivated successfully.
Dec  8 04:45:27 np0005550137 podman[76238]: 2025-12-08 09:45:27.530298605 +0000 UTC m=+0.045798558 container create 09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93 (image=quay.io/ceph/ceph:v19, name=sad_joliot, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Dec  8 04:45:27 np0005550137 systemd[1]: Started libpod-conmon-09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93.scope.
Dec  8 04:45:27 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b62be1309a334e6572482d06b36091e5352d161b348276afb123170d8cc27a2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b62be1309a334e6572482d06b36091e5352d161b348276afb123170d8cc27a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b62be1309a334e6572482d06b36091e5352d161b348276afb123170d8cc27a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:27 np0005550137 podman[76238]: 2025-12-08 09:45:27.509523179 +0000 UTC m=+0.025023152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:27 np0005550137 podman[76238]: 2025-12-08 09:45:27.616795099 +0000 UTC m=+0.132295062 container init 09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93 (image=quay.io/ceph/ceph:v19, name=sad_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  8 04:45:27 np0005550137 podman[76238]: 2025-12-08 09:45:27.624283847 +0000 UTC m=+0.139783790 container start 09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93 (image=quay.io/ceph/ceph:v19, name=sad_joliot, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:27 np0005550137 podman[76238]: 2025-12-08 09:45:27.629096439 +0000 UTC m=+0.144596382 container attach 09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93 (image=quay.io/ceph/ceph:v19, name=sad_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:27 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:27 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:27 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec  8 04:45:27 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec  8 04:45:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  8 04:45:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:28 np0005550137 sad_joliot[76276]: Scheduled mon update...
Dec  8 04:45:28 np0005550137 systemd[1]: libpod-09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93.scope: Deactivated successfully.
Dec  8 04:45:28 np0005550137 podman[76238]: 2025-12-08 09:45:28.334150967 +0000 UTC m=+0.849650940 container died 09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93 (image=quay.io/ceph/ceph:v19, name=sad_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:28 np0005550137 systemd[1]: var-lib-containers-storage-overlay-3b62be1309a334e6572482d06b36091e5352d161b348276afb123170d8cc27a2-merged.mount: Deactivated successfully.
Dec  8 04:45:28 np0005550137 podman[76238]: 2025-12-08 09:45:28.375071461 +0000 UTC m=+0.890571404 container remove 09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93 (image=quay.io/ceph/ceph:v19, name=sad_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  8 04:45:28 np0005550137 systemd[1]: libpod-conmon-09cc8707272f8fac7c5dc62b260e07be300c6da0b23adc68157488c0542cbb93.scope: Deactivated successfully.
Dec  8 04:45:28 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:28 np0005550137 ceph-mon[74516]: Added host compute-0
Dec  8 04:45:28 np0005550137 ceph-mon[74516]: Saving service mon spec with placement count:5
Dec  8 04:45:28 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:28 np0005550137 podman[76338]: 2025-12-08 09:45:28.437270367 +0000 UTC m=+0.042100712 container create aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b (image=quay.io/ceph/ceph:v19, name=eloquent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:28 np0005550137 systemd[1]: Started libpod-conmon-aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b.scope.
Dec  8 04:45:28 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:28 np0005550137 podman[76338]: 2025-12-08 09:45:28.421740717 +0000 UTC m=+0.026571082 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5c2b0a3c7faa7577b2acb42b317df3237178bbd0e14c6e7778aaff8911679e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5c2b0a3c7faa7577b2acb42b317df3237178bbd0e14c6e7778aaff8911679e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c5c2b0a3c7faa7577b2acb42b317df3237178bbd0e14c6e7778aaff8911679e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:28 np0005550137 podman[76338]: 2025-12-08 09:45:28.533417937 +0000 UTC m=+0.138248312 container init aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b (image=quay.io/ceph/ceph:v19, name=eloquent_ganguly, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  8 04:45:28 np0005550137 podman[76338]: 2025-12-08 09:45:28.540365716 +0000 UTC m=+0.145196061 container start aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b (image=quay.io/ceph/ceph:v19, name=eloquent_ganguly, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  8 04:45:28 np0005550137 podman[76338]: 2025-12-08 09:45:28.544620621 +0000 UTC m=+0.149450976 container attach aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b (image=quay.io/ceph/ceph:v19, name=eloquent_ganguly, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  8 04:45:28 np0005550137 podman[76313]: 2025-12-08 09:45:28.78122704 +0000 UTC m=+0.464467364 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:28 np0005550137 podman[76395]: 2025-12-08 09:45:28.884566077 +0000 UTC m=+0.045545930 container create b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99 (image=quay.io/ceph/ceph:v19, name=distracted_black, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:28 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:28 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec  8 04:45:28 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec  8 04:45:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  8 04:45:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:28 np0005550137 eloquent_ganguly[76357]: Scheduled mgr update...
Dec  8 04:45:28 np0005550137 systemd[1]: Started libpod-conmon-b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99.scope.
Dec  8 04:45:28 np0005550137 systemd[1]: libpod-aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b.scope: Deactivated successfully.
Dec  8 04:45:28 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:28 np0005550137 podman[76395]: 2025-12-08 09:45:28.863105119 +0000 UTC m=+0.024085012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:28 np0005550137 podman[76395]: 2025-12-08 09:45:28.976076491 +0000 UTC m=+0.137056384 container init b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99 (image=quay.io/ceph/ceph:v19, name=distracted_black, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  8 04:45:28 np0005550137 podman[76414]: 2025-12-08 09:45:28.977108443 +0000 UTC m=+0.032756106 container died aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b (image=quay.io/ceph/ceph:v19, name=eloquent_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  8 04:45:28 np0005550137 podman[76395]: 2025-12-08 09:45:28.983306369 +0000 UTC m=+0.144286222 container start b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99 (image=quay.io/ceph/ceph:v19, name=distracted_black, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  8 04:45:28 np0005550137 podman[76395]: 2025-12-08 09:45:28.987902585 +0000 UTC m=+0.148882458 container attach b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99 (image=quay.io/ceph/ceph:v19, name=distracted_black, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  8 04:45:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay-5c5c2b0a3c7faa7577b2acb42b317df3237178bbd0e14c6e7778aaff8911679e-merged.mount: Deactivated successfully.
Dec  8 04:45:29 np0005550137 podman[76414]: 2025-12-08 09:45:29.017261212 +0000 UTC m=+0.072908885 container remove aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b (image=quay.io/ceph/ceph:v19, name=eloquent_ganguly, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:45:29 np0005550137 systemd[1]: libpod-conmon-aa0e6185c4beb814cbe358d9670ea39499e353f924534aa248dafac104b0240b.scope: Deactivated successfully.
Dec  8 04:45:29 np0005550137 podman[76431]: 2025-12-08 09:45:29.105389179 +0000 UTC m=+0.057321574 container create 59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4 (image=quay.io/ceph/ceph:v19, name=frosty_faraday, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  8 04:45:29 np0005550137 distracted_black[76415]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Dec  8 04:45:29 np0005550137 systemd[1]: libpod-b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99.scope: Deactivated successfully.
Dec  8 04:45:29 np0005550137 podman[76395]: 2025-12-08 09:45:29.119006749 +0000 UTC m=+0.279986672 container died b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99 (image=quay.io/ceph/ceph:v19, name=distracted_black, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  8 04:45:29 np0005550137 systemd[1]: Started libpod-conmon-59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4.scope.
Dec  8 04:45:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay-7733f83ee06a437b6db40957b1c7eb79371ff7e753e495058a63eecb00f32145-merged.mount: Deactivated successfully.
Dec  8 04:45:29 np0005550137 podman[76431]: 2025-12-08 09:45:29.0782403 +0000 UTC m=+0.030172775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:29 np0005550137 podman[76395]: 2025-12-08 09:45:29.173841202 +0000 UTC m=+0.334821055 container remove b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99 (image=quay.io/ceph/ceph:v19, name=distracted_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:45:29 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa793faaf7dec8c9d4e46cc396c37f4fff779d67030363817b0e7f7a67871341/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa793faaf7dec8c9d4e46cc396c37f4fff779d67030363817b0e7f7a67871341/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa793faaf7dec8c9d4e46cc396c37f4fff779d67030363817b0e7f7a67871341/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:29 np0005550137 systemd[1]: libpod-conmon-b1874a762f85339fa7da42c7ebbba9505f30d14605591933c0c32374a87c5f99.scope: Deactivated successfully.
Dec  8 04:45:29 np0005550137 podman[76431]: 2025-12-08 09:45:29.202429676 +0000 UTC m=+0.154362071 container init 59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4 (image=quay.io/ceph/ceph:v19, name=frosty_faraday, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  8 04:45:29 np0005550137 podman[76431]: 2025-12-08 09:45:29.208779027 +0000 UTC m=+0.160711412 container start 59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4 (image=quay.io/ceph/ceph:v19, name=frosty_faraday, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec  8 04:45:29 np0005550137 podman[76431]: 2025-12-08 09:45:29.213126664 +0000 UTC m=+0.165059089 container attach 59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4 (image=quay.io/ceph/ceph:v19, name=frosty_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:29 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:29 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service crash spec with placement *
Dec  8 04:45:29 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:29 np0005550137 frosty_faraday[76458]: Scheduled crash update...
Dec  8 04:45:29 np0005550137 systemd[1]: libpod-59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4.scope: Deactivated successfully.
Dec  8 04:45:29 np0005550137 podman[76431]: 2025-12-08 09:45:29.606595693 +0000 UTC m=+0.558528068 container died 59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4 (image=quay.io/ceph/ceph:v19, name=frosty_faraday, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay-fa793faaf7dec8c9d4e46cc396c37f4fff779d67030363817b0e7f7a67871341-merged.mount: Deactivated successfully.
Dec  8 04:45:29 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:29 np0005550137 podman[76431]: 2025-12-08 09:45:29.645846544 +0000 UTC m=+0.597778919 container remove 59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4 (image=quay.io/ceph/ceph:v19, name=frosty_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  8 04:45:29 np0005550137 systemd[1]: libpod-conmon-59ff5bd707dda950848c4240f6907e3256d9d5c6c463943022bc3b09cd6c6ef4.scope: Deactivated successfully.
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:29 np0005550137 podman[76567]: 2025-12-08 09:45:29.737047577 +0000 UTC m=+0.072083279 container create f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060 (image=quay.io/ceph/ceph:v19, name=interesting_davinci, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:29 np0005550137 systemd[1]: Started libpod-conmon-f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060.scope.
Dec  8 04:45:29 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0be2546211cfe689c7b5c7d4de8ed61dda23ddaa148854b5ec973c1cda12af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0be2546211cfe689c7b5c7d4de8ed61dda23ddaa148854b5ec973c1cda12af/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb0be2546211cfe689c7b5c7d4de8ed61dda23ddaa148854b5ec973c1cda12af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:29 np0005550137 podman[76567]: 2025-12-08 09:45:29.806507653 +0000 UTC m=+0.141543355 container init f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060 (image=quay.io/ceph/ceph:v19, name=interesting_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054708 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:45:29 np0005550137 podman[76567]: 2025-12-08 09:45:29.718106978 +0000 UTC m=+0.053142700 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:29 np0005550137 podman[76567]: 2025-12-08 09:45:29.814905208 +0000 UTC m=+0.149940910 container start f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060 (image=quay.io/ceph/ceph:v19, name=interesting_davinci, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:29 np0005550137 podman[76567]: 2025-12-08 09:45:29.818223753 +0000 UTC m=+0.153259455 container attach f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060 (image=quay.io/ceph/ceph:v19, name=interesting_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: Saving service mgr spec with placement count:2
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3246878316' entity='client.admin' 
Dec  8 04:45:30 np0005550137 systemd[1]: libpod-f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060.scope: Deactivated successfully.
Dec  8 04:45:30 np0005550137 podman[76567]: 2025-12-08 09:45:30.20626151 +0000 UTC m=+0.541297212 container died f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060 (image=quay.io/ceph/ceph:v19, name=interesting_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  8 04:45:30 np0005550137 systemd[1]: var-lib-containers-storage-overlay-fb0be2546211cfe689c7b5c7d4de8ed61dda23ddaa148854b5ec973c1cda12af-merged.mount: Deactivated successfully.
Dec  8 04:45:30 np0005550137 podman[76567]: 2025-12-08 09:45:30.241550656 +0000 UTC m=+0.576586358 container remove f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060 (image=quay.io/ceph/ceph:v19, name=interesting_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  8 04:45:30 np0005550137 systemd[1]: libpod-conmon-f00cd4af081cd19092c61dfffb12c56d66dbd2cb5847045e358610ff5acb3060.scope: Deactivated successfully.
Dec  8 04:45:30 np0005550137 podman[76728]: 2025-12-08 09:45:30.300027525 +0000 UTC m=+0.040024967 container create 751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f (image=quay.io/ceph/ceph:v19, name=ecstatic_noyce, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Dec  8 04:45:30 np0005550137 systemd[1]: Started libpod-conmon-751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f.scope.
Dec  8 04:45:30 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02f7624a0cff4037c8d3ec735376979edb505bf66fe66fa6b2eeb1c0e3ef336/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02f7624a0cff4037c8d3ec735376979edb505bf66fe66fa6b2eeb1c0e3ef336/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02f7624a0cff4037c8d3ec735376979edb505bf66fe66fa6b2eeb1c0e3ef336/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:30 np0005550137 podman[76728]: 2025-12-08 09:45:30.375326895 +0000 UTC m=+0.115324347 container init 751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f (image=quay.io/ceph/ceph:v19, name=ecstatic_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  8 04:45:30 np0005550137 podman[76728]: 2025-12-08 09:45:30.279846536 +0000 UTC m=+0.019843998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:30 np0005550137 podman[76760]: 2025-12-08 09:45:30.379425854 +0000 UTC m=+0.057879190 container exec e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  8 04:45:30 np0005550137 podman[76728]: 2025-12-08 09:45:30.384239487 +0000 UTC m=+0.124236929 container start 751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f (image=quay.io/ceph/ceph:v19, name=ecstatic_noyce, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:30 np0005550137 podman[76728]: 2025-12-08 09:45:30.390878246 +0000 UTC m=+0.130875688 container attach 751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f (image=quay.io/ceph/ceph:v19, name=ecstatic_noyce, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:30 np0005550137 podman[76760]: 2025-12-08 09:45:30.478323391 +0000 UTC m=+0.156776697 container exec_died e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:30 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:30 np0005550137 systemd[1]: libpod-751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f.scope: Deactivated successfully.
Dec  8 04:45:30 np0005550137 podman[76887]: 2025-12-08 09:45:30.808981464 +0000 UTC m=+0.020570852 container died 751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f (image=quay.io/ceph/ceph:v19, name=ecstatic_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:30 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e02f7624a0cff4037c8d3ec735376979edb505bf66fe66fa6b2eeb1c0e3ef336-merged.mount: Deactivated successfully.
Dec  8 04:45:30 np0005550137 podman[76887]: 2025-12-08 09:45:30.846597313 +0000 UTC m=+0.058186701 container remove 751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f (image=quay.io/ceph/ceph:v19, name=ecstatic_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  8 04:45:30 np0005550137 systemd[1]: libpod-conmon-751d71c1cacbae9006096b7b6d5daa887d211c9d33b80c6d994f3f6bafa2b90f.scope: Deactivated successfully.
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: Saving service crash spec with placement *
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3246878316' entity='client.admin' 
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:30 np0005550137 podman[76902]: 2025-12-08 09:45:30.922054249 +0000 UTC m=+0.046258974 container create 954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59 (image=quay.io/ceph/ceph:v19, name=eloquent_poincare, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Dec  8 04:45:30 np0005550137 systemd[1]: Started libpod-conmon-954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59.scope.
Dec  8 04:45:30 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:30 np0005550137 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76933 (sysctl)
Dec  8 04:45:30 np0005550137 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  8 04:45:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b954e0368e0d7f5ee02ce10d4592610aadc20b11f8084820a7e675c96b3399/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b954e0368e0d7f5ee02ce10d4592610aadc20b11f8084820a7e675c96b3399/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1b954e0368e0d7f5ee02ce10d4592610aadc20b11f8084820a7e675c96b3399/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:30 np0005550137 podman[76902]: 2025-12-08 09:45:30.9021703 +0000 UTC m=+0.026375015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:31 np0005550137 podman[76902]: 2025-12-08 09:45:31.016806734 +0000 UTC m=+0.141011439 container init 954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59 (image=quay.io/ceph/ceph:v19, name=eloquent_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  8 04:45:31 np0005550137 podman[76902]: 2025-12-08 09:45:31.021937636 +0000 UTC m=+0.146142321 container start 954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59 (image=quay.io/ceph/ceph:v19, name=eloquent_poincare, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Dec  8 04:45:31 np0005550137 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  8 04:45:31 np0005550137 podman[76902]: 2025-12-08 09:45:31.027666797 +0000 UTC m=+0.151871502 container attach 954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59 (image=quay.io/ceph/ceph:v19, name=eloquent_poincare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:31 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  8 04:45:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:31 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Added label _admin to host compute-0
Dec  8 04:45:31 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec  8 04:45:31 np0005550137 eloquent_poincare[76930]: Added label _admin to host compute-0
Dec  8 04:45:31 np0005550137 systemd[1]: libpod-954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59.scope: Deactivated successfully.
Dec  8 04:45:31 np0005550137 podman[76902]: 2025-12-08 09:45:31.441177289 +0000 UTC m=+0.565381974 container died 954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59 (image=quay.io/ceph/ceph:v19, name=eloquent_poincare, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  8 04:45:31 np0005550137 systemd[1]: var-lib-containers-storage-overlay-a1b954e0368e0d7f5ee02ce10d4592610aadc20b11f8084820a7e675c96b3399-merged.mount: Deactivated successfully.
Dec  8 04:45:31 np0005550137 podman[76902]: 2025-12-08 09:45:31.478293173 +0000 UTC m=+0.602497858 container remove 954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59 (image=quay.io/ceph/ceph:v19, name=eloquent_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:31 np0005550137 systemd[1]: libpod-conmon-954f51a09be379214675b86c5626038b827505209e6bf12c6194ca1ff8672a59.scope: Deactivated successfully.
Dec  8 04:45:31 np0005550137 podman[77041]: 2025-12-08 09:45:31.54019098 +0000 UTC m=+0.040067028 container create 235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4 (image=quay.io/ceph/ceph:v19, name=busy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  8 04:45:31 np0005550137 systemd[1]: Started libpod-conmon-235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4.scope.
Dec  8 04:45:31 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:31 np0005550137 podman[77041]: 2025-12-08 09:45:31.52120834 +0000 UTC m=+0.021084408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:31 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e816ac81fd7d212d5c1a5b3203ff9b5a614ca4ccf966d58f831ea9e0d5e8e26c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:31 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e816ac81fd7d212d5c1a5b3203ff9b5a614ca4ccf966d58f831ea9e0d5e8e26c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:31 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e816ac81fd7d212d5c1a5b3203ff9b5a614ca4ccf966d58f831ea9e0d5e8e26c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:31 np0005550137 podman[77041]: 2025-12-08 09:45:31.634008355 +0000 UTC m=+0.133884413 container init 235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4 (image=quay.io/ceph/ceph:v19, name=busy_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  8 04:45:31 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:31 np0005550137 podman[77041]: 2025-12-08 09:45:31.64270697 +0000 UTC m=+0.142583018 container start 235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4 (image=quay.io/ceph/ceph:v19, name=busy_buck, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:31 np0005550137 podman[77041]: 2025-12-08 09:45:31.645750686 +0000 UTC m=+0.145626744 container attach 235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4 (image=quay.io/ceph/ceph:v19, name=busy_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  8 04:45:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec  8 04:45:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2737127885' entity='client.admin' 
Dec  8 04:45:32 np0005550137 busy_buck[77058]: set mgr/dashboard/cluster/status
Dec  8 04:45:32 np0005550137 systemd[1]: libpod-235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4.scope: Deactivated successfully.
Dec  8 04:45:32 np0005550137 podman[77041]: 2025-12-08 09:45:32.118346566 +0000 UTC m=+0.618222664 container died 235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4 (image=quay.io/ceph/ceph:v19, name=busy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:32 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e816ac81fd7d212d5c1a5b3203ff9b5a614ca4ccf966d58f831ea9e0d5e8e26c-merged.mount: Deactivated successfully.
Dec  8 04:45:32 np0005550137 podman[77041]: 2025-12-08 09:45:32.165301442 +0000 UTC m=+0.665177490 container remove 235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4 (image=quay.io/ceph/ceph:v19, name=busy_buck, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:32 np0005550137 systemd[1]: libpod-conmon-235bcf341d1b44c57ea08699b49304bd82f26b00ed8688c985a6c80310d589c4.scope: Deactivated successfully.
Dec  8 04:45:32 np0005550137 podman[77203]: 2025-12-08 09:45:32.246088245 +0000 UTC m=+0.053078509 container create 47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  8 04:45:32 np0005550137 systemd[1]: Started libpod-conmon-47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521.scope.
Dec  8 04:45:32 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:32 np0005550137 podman[77203]: 2025-12-08 09:45:32.217821962 +0000 UTC m=+0.024812286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:45:32 np0005550137 podman[77203]: 2025-12-08 09:45:32.321174619 +0000 UTC m=+0.128164883 container init 47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  8 04:45:32 np0005550137 podman[77203]: 2025-12-08 09:45:32.32752409 +0000 UTC m=+0.134514364 container start 47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  8 04:45:32 np0005550137 pedantic_yalow[77220]: 167 167
Dec  8 04:45:32 np0005550137 podman[77203]: 2025-12-08 09:45:32.332268259 +0000 UTC m=+0.139258543 container attach 47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  8 04:45:32 np0005550137 systemd[1]: libpod-47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521.scope: Deactivated successfully.
Dec  8 04:45:32 np0005550137 podman[77203]: 2025-12-08 09:45:32.333961783 +0000 UTC m=+0.140952027 container died 47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:32 np0005550137 systemd[1]: var-lib-containers-storage-overlay-6b6a9621dcd209a476962d930d5d66a560efdd27be13b54500738c001d8f2062-merged.mount: Deactivated successfully.
Dec  8 04:45:32 np0005550137 podman[77203]: 2025-12-08 09:45:32.38387482 +0000 UTC m=+0.190865064 container remove 47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:32 np0005550137 systemd[1]: libpod-conmon-47f86e030b1b67f530180978f103333d57d96ef0e7565755863ed76329886521.scope: Deactivated successfully.
Dec  8 04:45:32 np0005550137 podman[77261]: 2025-12-08 09:45:32.613023785 +0000 UTC m=+0.071383398 container create 677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lederberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:32 np0005550137 podman[77261]: 2025-12-08 09:45:32.58315934 +0000 UTC m=+0.041519043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:45:32 np0005550137 systemd[1]: Started libpod-conmon-677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d.scope.
Dec  8 04:45:32 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8731be85e9228603b0b933c2009f00f7d9b3944caf10f53abe2eb0cdd07274f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8731be85e9228603b0b933c2009f00f7d9b3944caf10f53abe2eb0cdd07274f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8731be85e9228603b0b933c2009f00f7d9b3944caf10f53abe2eb0cdd07274f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8731be85e9228603b0b933c2009f00f7d9b3944caf10f53abe2eb0cdd07274f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:32 np0005550137 python3[77281]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:45:32 np0005550137 podman[77261]: 2025-12-08 09:45:32.752072641 +0000 UTC m=+0.210432324 container init 677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Dec  8 04:45:32 np0005550137 podman[77261]: 2025-12-08 09:45:32.765255397 +0000 UTC m=+0.223615040 container start 677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:32 np0005550137 podman[77261]: 2025-12-08 09:45:32.769298035 +0000 UTC m=+0.227657728 container attach 677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  8 04:45:32 np0005550137 podman[77289]: 2025-12-08 09:45:32.780988385 +0000 UTC m=+0.035945568 container create 4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526 (image=quay.io/ceph/ceph:v19, name=zen_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  8 04:45:32 np0005550137 systemd[1]: Started libpod-conmon-4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526.scope.
Dec  8 04:45:32 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc6518133ffdbbba325cbf65883af58754e48a391a927e28015c4bb89871908/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfc6518133ffdbbba325cbf65883af58754e48a391a927e28015c4bb89871908/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:32 np0005550137 podman[77289]: 2025-12-08 09:45:32.862268715 +0000 UTC m=+0.117225918 container init 4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526 (image=quay.io/ceph/ceph:v19, name=zen_hugle, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:32 np0005550137 podman[77289]: 2025-12-08 09:45:32.765004019 +0000 UTC m=+0.019961232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:32 np0005550137 podman[77289]: 2025-12-08 09:45:32.868019736 +0000 UTC m=+0.122976919 container start 4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526 (image=quay.io/ceph/ceph:v19, name=zen_hugle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  8 04:45:32 np0005550137 podman[77289]: 2025-12-08 09:45:32.871294289 +0000 UTC m=+0.126251472 container attach 4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526 (image=quay.io/ceph/ceph:v19, name=zen_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: Added label _admin to host compute-0
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2737127885' entity='client.admin' 
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/927097132' entity='client.admin' 
Dec  8 04:45:33 np0005550137 systemd[1]: libpod-4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526.scope: Deactivated successfully.
Dec  8 04:45:33 np0005550137 podman[77289]: 2025-12-08 09:45:33.235473282 +0000 UTC m=+0.490430495 container died 4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526 (image=quay.io/ceph/ceph:v19, name=zen_hugle, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Dec  8 04:45:33 np0005550137 systemd[1]: var-lib-containers-storage-overlay-dfc6518133ffdbbba325cbf65883af58754e48a391a927e28015c4bb89871908-merged.mount: Deactivated successfully.
Dec  8 04:45:33 np0005550137 podman[77289]: 2025-12-08 09:45:33.277407608 +0000 UTC m=+0.532364791 container remove 4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526 (image=quay.io/ceph/ceph:v19, name=zen_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  8 04:45:33 np0005550137 systemd[1]: libpod-conmon-4fdadb1e73c5576a89fc2fd70e8fcbd376bb2d6e75806f6870bfc1e640a64526.scope: Deactivated successfully.
Dec  8 04:45:33 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]: [
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:    {
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "available": false,
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "being_replaced": false,
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "ceph_device_lvm": false,
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "lsm_data": {},
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "lvs": [],
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "path": "/dev/sr0",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "rejected_reasons": [
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "Has a FileSystem",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "Insufficient space (<5GB)"
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        ],
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        "sys_api": {
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "actuators": null,
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "device_nodes": [
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:                "sr0"
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            ],
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "devname": "sr0",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "human_readable_size": "482.00 KB",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "id_bus": "ata",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "model": "QEMU DVD-ROM",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "nr_requests": "2",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "parent": "/dev/sr0",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "partitions": {},
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "path": "/dev/sr0",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "removable": "1",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "rev": "2.5+",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "ro": "0",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "rotational": "1",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "sas_address": "",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "sas_device_handle": "",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "scheduler_mode": "mq-deadline",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "sectors": 0,
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "sectorsize": "2048",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "size": 493568.0,
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "support_discard": "2048",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "type": "disk",
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:            "vendor": "QEMU"
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:        }
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]:    }
Dec  8 04:45:33 np0005550137 nervous_lederberg[77286]: ]
Dec  8 04:45:33 np0005550137 systemd[1]: libpod-677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d.scope: Deactivated successfully.
Dec  8 04:45:33 np0005550137 podman[77261]: 2025-12-08 09:45:33.680562433 +0000 UTC m=+1.138922036 container died 677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lederberg, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  8 04:45:33 np0005550137 systemd[1]: var-lib-containers-storage-overlay-8731be85e9228603b0b933c2009f00f7d9b3944caf10f53abe2eb0cdd07274f7-merged.mount: Deactivated successfully.
Dec  8 04:45:33 np0005550137 podman[77261]: 2025-12-08 09:45:33.729538061 +0000 UTC m=+1.187897704 container remove 677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_lederberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:33 np0005550137 systemd[1]: libpod-conmon-677339816f39d4f178575e238fcc24b02a4c180d0213e8ef0a23682bf198242d.scope: Deactivated successfully.
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:45:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:45:33 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:45:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/927097132' entity='client.admin' 
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:45:34 np0005550137 ansible-async_wrapper.py[78793]: Invoked with j389128767568 30 /home/zuul/.ansible/tmp/ansible-tmp-1765187133.6687763-37049-96437470741767/AnsiballZ_command.py _
Dec  8 04:45:34 np0005550137 ansible-async_wrapper.py[78850]: Starting module and watcher
Dec  8 04:45:34 np0005550137 ansible-async_wrapper.py[78850]: Start watching 78852 (30)
Dec  8 04:45:34 np0005550137 ansible-async_wrapper.py[78852]: Start module (78852)
Dec  8 04:45:34 np0005550137 ansible-async_wrapper.py[78793]: Return async_wrapper task started.
Dec  8 04:45:34 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:45:34 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:45:34 np0005550137 python3[78856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:45:34 np0005550137 podman[78916]: 2025-12-08 09:45:34.51935479 +0000 UTC m=+0.036687071 container create b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2 (image=quay.io/ceph/ceph:v19, name=gifted_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  8 04:45:34 np0005550137 systemd[1]: Started libpod-conmon-b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2.scope.
Dec  8 04:45:34 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:34 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d83b4d255011dfe93573de7d75a8d0a6b0ecb8fb27da7ab05594c5b441a9cd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:34 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46d83b4d255011dfe93573de7d75a8d0a6b0ecb8fb27da7ab05594c5b441a9cd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:34 np0005550137 podman[78916]: 2025-12-08 09:45:34.505090959 +0000 UTC m=+0.022423250 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:34 np0005550137 podman[78916]: 2025-12-08 09:45:34.616427748 +0000 UTC m=+0.133760059 container init b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2 (image=quay.io/ceph/ceph:v19, name=gifted_chatelet, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:34 np0005550137 podman[78916]: 2025-12-08 09:45:34.624580846 +0000 UTC m=+0.141913127 container start b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2 (image=quay.io/ceph/ceph:v19, name=gifted_chatelet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:34 np0005550137 podman[78916]: 2025-12-08 09:45:34.628406497 +0000 UTC m=+0.145738768 container attach b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2 (image=quay.io/ceph/ceph:v19, name=gifted_chatelet, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  8 04:45:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:45:34 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:45:34 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:45:34 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  8 04:45:34 np0005550137 gifted_chatelet[78964]: 
Dec  8 04:45:34 np0005550137 gifted_chatelet[78964]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  8 04:45:35 np0005550137 systemd[1]: libpod-b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2.scope: Deactivated successfully.
Dec  8 04:45:35 np0005550137 podman[78916]: 2025-12-08 09:45:35.009970829 +0000 UTC m=+0.527303130 container died b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2 (image=quay.io/ceph/ceph:v19, name=gifted_chatelet, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  8 04:45:35 np0005550137 systemd[1]: var-lib-containers-storage-overlay-46d83b4d255011dfe93573de7d75a8d0a6b0ecb8fb27da7ab05594c5b441a9cd-merged.mount: Deactivated successfully.
Dec  8 04:45:35 np0005550137 podman[78916]: 2025-12-08 09:45:35.047701752 +0000 UTC m=+0.565034023 container remove b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2 (image=quay.io/ceph/ceph:v19, name=gifted_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:35 np0005550137 systemd[1]: libpod-conmon-b0decc72a7b3d6faf9dfd5e9f311f84f91bba9b62ae8cae3a470671321f117e2.scope: Deactivated successfully.
Dec  8 04:45:35 np0005550137 ansible-async_wrapper.py[78852]: Module complete (78852)
Dec  8 04:45:35 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:45:35 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:45:35 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:45:35 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:45:35 np0005550137 ceph-mgr[74806]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec  8 04:45:35 np0005550137 python3[79475]: ansible-ansible.legacy.async_status Invoked with jid=j389128767568.78793 mode=status _async_dir=/root/.ansible_async
Dec  8 04:45:36 np0005550137 python3[79641]: ansible-ansible.legacy.async_status Invoked with jid=j389128767568.78793 mode=cleanup _async_dir=/root/.ansible_async
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:36 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 3cf6b15a-f145-40b4-9fe2-c59d94459737 (Updating crash deployment (+1 -> 1))
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:45:36 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec  8 04:45:36 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:45:36 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  8 04:45:36 np0005550137 python3[79769]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  8 04:45:36 np0005550137 podman[79813]: 2025-12-08 09:45:36.821085694 +0000 UTC m=+0.040083078 container create 3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:36 np0005550137 systemd[1]: Started libpod-conmon-3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367.scope.
Dec  8 04:45:36 np0005550137 podman[79813]: 2025-12-08 09:45:36.803753766 +0000 UTC m=+0.022751170 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:45:36 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:36 np0005550137 podman[79813]: 2025-12-08 09:45:36.9196853 +0000 UTC m=+0.138682774 container init 3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_morse, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:36 np0005550137 podman[79813]: 2025-12-08 09:45:36.931829245 +0000 UTC m=+0.150826639 container start 3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_morse, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:36 np0005550137 podman[79813]: 2025-12-08 09:45:36.935917074 +0000 UTC m=+0.154914498 container attach 3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_morse, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:36 np0005550137 dreamy_morse[79829]: 167 167
Dec  8 04:45:36 np0005550137 systemd[1]: libpod-3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367.scope: Deactivated successfully.
Dec  8 04:45:36 np0005550137 podman[79813]: 2025-12-08 09:45:36.93898226 +0000 UTC m=+0.157979684 container died 3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  8 04:45:36 np0005550137 systemd[1]: var-lib-containers-storage-overlay-90ba99742176c25bdab6f6e9bf91cd511403cf79c829c1cc13e119de8e403a21-merged.mount: Deactivated successfully.
Dec  8 04:45:36 np0005550137 podman[79813]: 2025-12-08 09:45:36.990955764 +0000 UTC m=+0.209953148 container remove 3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:45:37 np0005550137 systemd[1]: libpod-conmon-3ca17443de6d80eff1a8ee8e4e8987f234eb33e5785f4ce17955a692e52fc367.scope: Deactivated successfully.
Dec  8 04:45:37 np0005550137 systemd[1]: Reloading.
Dec  8 04:45:37 np0005550137 python3[79866]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:45:37 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:45:37 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:45:37 np0005550137 podman[79875]: 2025-12-08 09:45:37.191423881 +0000 UTC m=+0.058214011 container create 18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c (image=quay.io/ceph/ceph:v19, name=confident_lichterman, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  8 04:45:37 np0005550137 ceph-mon[74516]: Deploying daemon crash.compute-0 on compute-0
Dec  8 04:45:37 np0005550137 podman[79875]: 2025-12-08 09:45:37.173075981 +0000 UTC m=+0.039866141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:37 np0005550137 systemd[1]: Started libpod-conmon-18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c.scope.
Dec  8 04:45:37 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6509ae5090af6bc69d5b12b49a3e9b6e8f1794d33938482d6a0ab739309aaa1e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6509ae5090af6bc69d5b12b49a3e9b6e8f1794d33938482d6a0ab739309aaa1e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6509ae5090af6bc69d5b12b49a3e9b6e8f1794d33938482d6a0ab739309aaa1e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:37 np0005550137 systemd[1]: Reloading.
Dec  8 04:45:37 np0005550137 podman[79875]: 2025-12-08 09:45:37.430272012 +0000 UTC m=+0.297062162 container init 18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c (image=quay.io/ceph/ceph:v19, name=confident_lichterman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:37 np0005550137 podman[79875]: 2025-12-08 09:45:37.440262977 +0000 UTC m=+0.307053127 container start 18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c (image=quay.io/ceph/ceph:v19, name=confident_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  8 04:45:37 np0005550137 podman[79875]: 2025-12-08 09:45:37.444993008 +0000 UTC m=+0.311783138 container attach 18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c (image=quay.io/ceph/ceph:v19, name=confident_lichterman, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  8 04:45:37 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:45:37 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:45:37 np0005550137 ceph-mgr[74806]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec  8 04:45:37 np0005550137 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  8 04:45:37 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:37 np0005550137 systemd[1]: Starting Ceph crash.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:45:37 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  8 04:45:37 np0005550137 confident_lichterman[79925]: 
Dec  8 04:45:37 np0005550137 confident_lichterman[79925]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  8 04:45:37 np0005550137 systemd[1]: libpod-18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c.scope: Deactivated successfully.
Dec  8 04:45:37 np0005550137 podman[79875]: 2025-12-08 09:45:37.818564627 +0000 UTC m=+0.685354747 container died 18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c (image=quay.io/ceph/ceph:v19, name=confident_lichterman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:37 np0005550137 systemd[1]: var-lib-containers-storage-overlay-6509ae5090af6bc69d5b12b49a3e9b6e8f1794d33938482d6a0ab739309aaa1e-merged.mount: Deactivated successfully.
Dec  8 04:45:37 np0005550137 podman[79875]: 2025-12-08 09:45:37.861949789 +0000 UTC m=+0.728739899 container remove 18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c (image=quay.io/ceph/ceph:v19, name=confident_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  8 04:45:37 np0005550137 systemd[1]: libpod-conmon-18f02d4cab89e246a200c30f3f63c13581d97ff3a17b0527c01f6aa8df28951c.scope: Deactivated successfully.
Dec  8 04:45:37 np0005550137 podman[80046]: 2025-12-08 09:45:37.989400767 +0000 UTC m=+0.044635372 container create bc6254304966247a6d6514e47b72babce3456db550b3f469ee95f721956452c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ed8a312366ca7750c941b1559a3a164a554b955b5c540e86da6bf46beb9a99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ed8a312366ca7750c941b1559a3a164a554b955b5c540e86da6bf46beb9a99/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ed8a312366ca7750c941b1559a3a164a554b955b5c540e86da6bf46beb9a99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0ed8a312366ca7750c941b1559a3a164a554b955b5c540e86da6bf46beb9a99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:38 np0005550137 podman[80046]: 2025-12-08 09:45:37.966271976 +0000 UTC m=+0.021506551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:45:38 np0005550137 podman[80046]: 2025-12-08 09:45:38.069126108 +0000 UTC m=+0.124360753 container init bc6254304966247a6d6514e47b72babce3456db550b3f469ee95f721956452c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  8 04:45:38 np0005550137 podman[80046]: 2025-12-08 09:45:38.079124734 +0000 UTC m=+0.134359339 container start bc6254304966247a6d6514e47b72babce3456db550b3f469ee95f721956452c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Dec  8 04:45:38 np0005550137 bash[80046]: bc6254304966247a6d6514e47b72babce3456db550b3f469ee95f721956452c5
Dec  8 04:45:38 np0005550137 systemd[1]: Started Ceph crash.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: INFO:ceph-crash:pinging cluster to exercise our key
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 3cf6b15a-f145-40b4-9fe2-c59d94459737 (Updating crash deployment (+1 -> 1))
Dec  8 04:45:38 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 3cf6b15a-f145-40b4-9fe2-c59d94459737 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: 2025-12-08T09:45:38.286+0000 7f9018ae0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: 2025-12-08T09:45:38.286+0000 7f9018ae0640 -1 AuthRegistry(0x7f9014069490) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: 2025-12-08T09:45:38.288+0000 7f9018ae0640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: 2025-12-08T09:45:38.288+0000 7f9018ae0640 -1 AuthRegistry(0x7f9018adeff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: 2025-12-08T09:45:38.289+0000 7f9012575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: 2025-12-08T09:45:38.289+0000 7f9018ae0640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec  8 04:45:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-crash-compute-0[80061]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec  8 04:45:38 np0005550137 python3[80096]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:45:38 np0005550137 podman[80166]: 2025-12-08 09:45:38.429324375 +0000 UTC m=+0.049931600 container create b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975 (image=quay.io/ceph/ceph:v19, name=priceless_bartik, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:38 np0005550137 systemd[1]: Started libpod-conmon-b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975.scope.
Dec  8 04:45:38 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:38 np0005550137 podman[80166]: 2025-12-08 09:45:38.403616672 +0000 UTC m=+0.024223907 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13990885792427b12525aca0d7c99a21767f521fad6aa227bdc278bf98a691f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13990885792427b12525aca0d7c99a21767f521fad6aa227bdc278bf98a691f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d13990885792427b12525aca0d7c99a21767f521fad6aa227bdc278bf98a691f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:38 np0005550137 podman[80166]: 2025-12-08 09:45:38.517136271 +0000 UTC m=+0.137743496 container init b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975 (image=quay.io/ceph/ceph:v19, name=priceless_bartik, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  8 04:45:38 np0005550137 podman[80166]: 2025-12-08 09:45:38.528031635 +0000 UTC m=+0.148638830 container start b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975 (image=quay.io/ceph/ceph:v19, name=priceless_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:45:38 np0005550137 podman[80166]: 2025-12-08 09:45:38.545881009 +0000 UTC m=+0.166488254 container attach b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975 (image=quay.io/ceph/ceph:v19, name=priceless_bartik, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec  8 04:45:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/120047362' entity='client.admin' 
Dec  8 04:45:38 np0005550137 systemd[1]: libpod-b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975.scope: Deactivated successfully.
Dec  8 04:45:38 np0005550137 podman[80166]: 2025-12-08 09:45:38.929897959 +0000 UTC m=+0.550505174 container died b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975 (image=quay.io/ceph/ceph:v19, name=priceless_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:38 np0005550137 podman[80290]: 2025-12-08 09:45:38.936842379 +0000 UTC m=+0.062321011 container exec e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:38 np0005550137 systemd[1]: var-lib-containers-storage-overlay-d13990885792427b12525aca0d7c99a21767f521fad6aa227bdc278bf98a691f-merged.mount: Deactivated successfully.
Dec  8 04:45:38 np0005550137 podman[80166]: 2025-12-08 09:45:38.97925252 +0000 UTC m=+0.599859715 container remove b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975 (image=quay.io/ceph/ceph:v19, name=priceless_bartik, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:38 np0005550137 systemd[1]: libpod-conmon-b59b1bda8b8f4ce6c0f58da6a4dbc7fc24234bbbd8d8c6a1a8a22049aaf77975.scope: Deactivated successfully.
Dec  8 04:45:39 np0005550137 podman[80290]: 2025-12-08 09:45:39.034278699 +0000 UTC m=+0.159757331 container exec_died e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:39 np0005550137 python3[80379]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 podman[80396]: 2025-12-08 09:45:39.322875193 +0000 UTC m=+0.039748388 container create 4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2 (image=quay.io/ceph/ceph:v19, name=suspicious_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  8 04:45:39 np0005550137 ansible-async_wrapper.py[78850]: Done in kid B.
Dec  8 04:45:39 np0005550137 systemd[1]: Started libpod-conmon-4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2.scope.
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Dec  8 04:45:39 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Dec  8 04:45:39 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c1ae137c216060a665cd73ec6de559c787addb716a2faca883e69fe1546745/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:39 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c1ae137c216060a665cd73ec6de559c787addb716a2faca883e69fe1546745/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:39 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32c1ae137c216060a665cd73ec6de559c787addb716a2faca883e69fe1546745/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Dec  8 04:45:39 np0005550137 podman[80396]: 2025-12-08 09:45:39.306888157 +0000 UTC m=+0.023761362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec  8 04:45:39 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  8 04:45:39 np0005550137 podman[80396]: 2025-12-08 09:45:39.413894089 +0000 UTC m=+0.130767304 container init 4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2 (image=quay.io/ceph/ceph:v19, name=suspicious_lamarr, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:45:39 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  8 04:45:39 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  8 04:45:39 np0005550137 podman[80396]: 2025-12-08 09:45:39.427811759 +0000 UTC m=+0.144684944 container start 4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2 (image=quay.io/ceph/ceph:v19, name=suspicious_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  8 04:45:39 np0005550137 podman[80396]: 2025-12-08 09:45:39.431570719 +0000 UTC m=+0.148443954 container attach 4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2 (image=quay.io/ceph/ceph:v19, name=suspicious_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  8 04:45:39 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3216566127' entity='client.admin' 
Dec  8 04:45:39 np0005550137 systemd[1]: libpod-4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2.scope: Deactivated successfully.
Dec  8 04:45:39 np0005550137 podman[80396]: 2025-12-08 09:45:39.790897248 +0000 UTC m=+0.507770433 container died 4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2 (image=quay.io/ceph/ceph:v19, name=suspicious_lamarr, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:45:39 np0005550137 systemd[1]: var-lib-containers-storage-overlay-32c1ae137c216060a665cd73ec6de559c787addb716a2faca883e69fe1546745-merged.mount: Deactivated successfully.
Dec  8 04:45:39 np0005550137 podman[80396]: 2025-12-08 09:45:39.855254935 +0000 UTC m=+0.572128150 container remove 4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2 (image=quay.io/ceph/ceph:v19, name=suspicious_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:39 np0005550137 systemd[1]: libpod-conmon-4bad3f3b3ddb0a0d6425211b920ea68651c70b4d6893288a5b188b3f023c9bb2.scope: Deactivated successfully.
Dec  8 04:45:39 np0005550137 podman[80540]: 2025-12-08 09:45:39.910489369 +0000 UTC m=+0.041101329 container create 6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819 (image=quay.io/ceph/ceph:v19, name=vibrant_kilby, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/120047362' entity='client.admin' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:45:39 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3216566127' entity='client.admin' 
Dec  8 04:45:39 np0005550137 systemd[1]: Started libpod-conmon-6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819.scope.
Dec  8 04:45:39 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:39 np0005550137 podman[80540]: 2025-12-08 09:45:39.983715664 +0000 UTC m=+0.114327614 container init 6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819 (image=quay.io/ceph/ceph:v19, name=vibrant_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:45:39 np0005550137 podman[80540]: 2025-12-08 09:45:39.892242463 +0000 UTC m=+0.022854403 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:39 np0005550137 podman[80540]: 2025-12-08 09:45:39.995181646 +0000 UTC m=+0.125793596 container start 6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819 (image=quay.io/ceph/ceph:v19, name=vibrant_kilby, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  8 04:45:39 np0005550137 podman[80540]: 2025-12-08 09:45:39.999354748 +0000 UTC m=+0.129966708 container attach 6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819 (image=quay.io/ceph/ceph:v19, name=vibrant_kilby, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:40 np0005550137 vibrant_kilby[80556]: 167 167
Dec  8 04:45:40 np0005550137 systemd[1]: libpod-6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819.scope: Deactivated successfully.
Dec  8 04:45:40 np0005550137 podman[80540]: 2025-12-08 09:45:40.00575948 +0000 UTC m=+0.136371390 container died 6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819 (image=quay.io/ceph/ceph:v19, name=vibrant_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:40 np0005550137 systemd[1]: var-lib-containers-storage-overlay-c04f56c508e993a47b0b354314442b70841f819e1f8c34f162d63b601e77dac3-merged.mount: Deactivated successfully.
Dec  8 04:45:40 np0005550137 podman[80540]: 2025-12-08 09:45:40.055593696 +0000 UTC m=+0.186205636 container remove 6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819 (image=quay.io/ceph/ceph:v19, name=vibrant_kilby, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:40 np0005550137 systemd[1]: libpod-conmon-6f9804653606b2dff4ab0136ccb7e42858f5cdc26c9e3a35ac5ba9f22c4ef819.scope: Deactivated successfully.
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.kitiwu (unknown last config time)...
Dec  8 04:45:40 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.kitiwu (unknown last config time)...
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.kitiwu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kitiwu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.kitiwu on compute-0
Dec  8 04:45:40 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.kitiwu on compute-0
Dec  8 04:45:40 np0005550137 python3[80598]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:45:40 np0005550137 podman[80624]: 2025-12-08 09:45:40.274681532 +0000 UTC m=+0.042358170 container create 7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14 (image=quay.io/ceph/ceph:v19, name=nostalgic_black, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:40 np0005550137 systemd[1]: Started libpod-conmon-7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14.scope.
Dec  8 04:45:40 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:40 np0005550137 podman[80624]: 2025-12-08 09:45:40.255394342 +0000 UTC m=+0.023071010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:40 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f0f96513b97ffb5ad89498d9c5c8c8b2630143c8577b24da544750428e94969/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:40 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f0f96513b97ffb5ad89498d9c5c8c8b2630143c8577b24da544750428e94969/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:40 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f0f96513b97ffb5ad89498d9c5c8c8b2630143c8577b24da544750428e94969/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:40 np0005550137 podman[80624]: 2025-12-08 09:45:40.366390971 +0000 UTC m=+0.134067689 container init 7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14 (image=quay.io/ceph/ceph:v19, name=nostalgic_black, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  8 04:45:40 np0005550137 podman[80624]: 2025-12-08 09:45:40.374396324 +0000 UTC m=+0.142072962 container start 7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14 (image=quay.io/ceph/ceph:v19, name=nostalgic_black, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:40 np0005550137 podman[80624]: 2025-12-08 09:45:40.377582545 +0000 UTC m=+0.145259263 container attach 7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14 (image=quay.io/ceph/ceph:v19, name=nostalgic_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  8 04:45:40 np0005550137 podman[80706]: 2025-12-08 09:45:40.613134791 +0000 UTC m=+0.042415772 container create e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b (image=quay.io/ceph/ceph:v19, name=epic_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:40 np0005550137 systemd[1]: Started libpod-conmon-e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b.scope.
Dec  8 04:45:40 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:40 np0005550137 podman[80706]: 2025-12-08 09:45:40.670276318 +0000 UTC m=+0.099557309 container init e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b (image=quay.io/ceph/ceph:v19, name=epic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  8 04:45:40 np0005550137 podman[80706]: 2025-12-08 09:45:40.675633607 +0000 UTC m=+0.104914588 container start e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b (image=quay.io/ceph/ceph:v19, name=epic_beaver, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  8 04:45:40 np0005550137 epic_beaver[80724]: 167 167
Dec  8 04:45:40 np0005550137 systemd[1]: libpod-e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b.scope: Deactivated successfully.
Dec  8 04:45:40 np0005550137 podman[80706]: 2025-12-08 09:45:40.679453027 +0000 UTC m=+0.108734018 container attach e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b (image=quay.io/ceph/ceph:v19, name=epic_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  8 04:45:40 np0005550137 podman[80706]: 2025-12-08 09:45:40.680554673 +0000 UTC m=+0.109835654 container died e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b (image=quay.io/ceph/ceph:v19, name=epic_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  8 04:45:40 np0005550137 podman[80706]: 2025-12-08 09:45:40.593923264 +0000 UTC m=+0.023204255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:40 np0005550137 systemd[1]: var-lib-containers-storage-overlay-767c8b6ad71a8ae309c9ad28019e3bab830f1a5758211d1c5ff1e633100c70d7-merged.mount: Deactivated successfully.
Dec  8 04:45:40 np0005550137 podman[80706]: 2025-12-08 09:45:40.720842387 +0000 UTC m=+0.150123368 container remove e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b (image=quay.io/ceph/ceph:v19, name=epic_beaver, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:45:40 np0005550137 systemd[1]: libpod-conmon-e776ac4a71afb37eceb6d1f18df86258162998db09e7ae01ac84ecc8487ca07b.scope: Deactivated successfully.
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3632428028' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kitiwu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3632428028' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:45:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec  8 04:45:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:45:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3632428028' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  8 04:45:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec  8 04:45:41 np0005550137 nostalgic_black[80666]: set require_min_compat_client to mimic
Dec  8 04:45:41 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec  8 04:45:41 np0005550137 systemd[1]: libpod-7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14.scope: Deactivated successfully.
Dec  8 04:45:41 np0005550137 podman[80624]: 2025-12-08 09:45:41.439316669 +0000 UTC m=+1.206993357 container died 7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14 (image=quay.io/ceph/ceph:v19, name=nostalgic_black, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  8 04:45:41 np0005550137 systemd[1]: var-lib-containers-storage-overlay-5f0f96513b97ffb5ad89498d9c5c8c8b2630143c8577b24da544750428e94969-merged.mount: Deactivated successfully.
Dec  8 04:45:41 np0005550137 podman[80624]: 2025-12-08 09:45:41.485125718 +0000 UTC m=+1.252802366 container remove 7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14 (image=quay.io/ceph/ceph:v19, name=nostalgic_black, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  8 04:45:41 np0005550137 systemd[1]: libpod-conmon-7d5f7c608394ac2725987d1496bd9c300b60dd1c0806275243147cadec790a14.scope: Deactivated successfully.
Dec  8 04:45:41 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:41 np0005550137 ceph-mon[74516]: Reconfiguring mgr.compute-0.kitiwu (unknown last config time)...
Dec  8 04:45:41 np0005550137 ceph-mon[74516]: Reconfiguring daemon mgr.compute-0.kitiwu on compute-0
Dec  8 04:45:41 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3632428028' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec  8 04:45:42 np0005550137 python3[80806]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:45:42 np0005550137 podman[80807]: 2025-12-08 09:45:42.182909876 +0000 UTC m=+0.049180446 container create e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  8 04:45:42 np0005550137 systemd[1]: Started libpod-conmon-e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80.scope.
Dec  8 04:45:42 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:42 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea43e304ae25ef40829e92b840cec27d7c0a7302801c63906f833a3e659e5ca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:42 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea43e304ae25ef40829e92b840cec27d7c0a7302801c63906f833a3e659e5ca/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:42 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ea43e304ae25ef40829e92b840cec27d7c0a7302801c63906f833a3e659e5ca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:42 np0005550137 podman[80807]: 2025-12-08 09:45:42.164826935 +0000 UTC m=+0.031097555 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:42 np0005550137 podman[80807]: 2025-12-08 09:45:42.261309164 +0000 UTC m=+0.127579754 container init e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  8 04:45:42 np0005550137 podman[80807]: 2025-12-08 09:45:42.271771766 +0000 UTC m=+0.138042336 container start e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:45:42 np0005550137 podman[80807]: 2025-12-08 09:45:42.275551234 +0000 UTC m=+0.141821834 container attach e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:42 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:45:42 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 1 completed events
Dec  8 04:45:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:45:42 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:42 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:43 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Added host compute-0
Dec  8 04:45:43 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Added host compute-0
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:45:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:43 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:44 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:44 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:44 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:44 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:44 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:45:44 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:44 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Dec  8 04:45:44 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Dec  8 04:45:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:45:45 np0005550137 ceph-mon[74516]: Added host compute-0
Dec  8 04:45:45 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:46 np0005550137 ceph-mon[74516]: Deploying cephadm binary to compute-1
Dec  8 04:45:47 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:45:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:45:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:45:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:45:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:45:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:45:48 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  8 04:45:48 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:48 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Added host compute-1
Dec  8 04:45:48 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Added host compute-1
Dec  8 04:45:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:45:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:49 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:45:49 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:49 np0005550137 ceph-mon[74516]: Added host compute-1
Dec  8 04:45:49 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:45:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:50 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Dec  8 04:45:50 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Dec  8 04:45:50 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:45:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:51 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:51 np0005550137 ceph-mon[74516]: Deploying cephadm binary to compute-2
Dec  8 04:45:51 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:53 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Added host compute-2
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Added host compute-2
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:54 np0005550137 keen_dubinsky[80822]: Added host 'compute-0' with addr '192.168.122.100'
Dec  8 04:45:54 np0005550137 keen_dubinsky[80822]: Added host 'compute-1' with addr '192.168.122.101'
Dec  8 04:45:54 np0005550137 keen_dubinsky[80822]: Added host 'compute-2' with addr '192.168.122.102'
Dec  8 04:45:54 np0005550137 keen_dubinsky[80822]: Scheduled mon update...
Dec  8 04:45:54 np0005550137 keen_dubinsky[80822]: Scheduled mgr update...
Dec  8 04:45:54 np0005550137 keen_dubinsky[80822]: Scheduled osd.default_drive_group update...
Dec  8 04:45:54 np0005550137 systemd[1]: libpod-e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80.scope: Deactivated successfully.
Dec  8 04:45:54 np0005550137 podman[80807]: 2025-12-08 09:45:54.236009011 +0000 UTC m=+12.102279601 container died e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  8 04:45:54 np0005550137 systemd[1]: var-lib-containers-storage-overlay-3ea43e304ae25ef40829e92b840cec27d7c0a7302801c63906f833a3e659e5ca-merged.mount: Deactivated successfully.
Dec  8 04:45:54 np0005550137 podman[80807]: 2025-12-08 09:45:54.282938054 +0000 UTC m=+12.149208624 container remove e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  8 04:45:54 np0005550137 systemd[1]: libpod-conmon-e301bb685fbad1838b2a9f219e2290b25506c363247b87bc880315369bb91d80.scope: Deactivated successfully.
Dec  8 04:45:54 np0005550137 python3[80983]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:45:54 np0005550137 podman[80985]: 2025-12-08 09:45:54.775597068 +0000 UTC m=+0.050467497 container create 95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35 (image=quay.io/ceph/ceph:v19, name=sharp_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:45:54 np0005550137 systemd[1]: Started libpod-conmon-95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35.scope.
Dec  8 04:45:54 np0005550137 podman[80985]: 2025-12-08 09:45:54.751427174 +0000 UTC m=+0.026297583 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:45:54 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:45:54 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82180f112f1c26579b446df86a16ee909b5b3a67f328c2dbab626e2c734544e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:54 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82180f112f1c26579b446df86a16ee909b5b3a67f328c2dbab626e2c734544e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:54 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e82180f112f1c26579b446df86a16ee909b5b3a67f328c2dbab626e2c734544e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:45:54 np0005550137 podman[80985]: 2025-12-08 09:45:54.872710259 +0000 UTC m=+0.147580698 container init 95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35 (image=quay.io/ceph/ceph:v19, name=sharp_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  8 04:45:54 np0005550137 podman[80985]: 2025-12-08 09:45:54.885688628 +0000 UTC m=+0.160559017 container start 95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35 (image=quay.io/ceph/ceph:v19, name=sharp_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  8 04:45:54 np0005550137 podman[80985]: 2025-12-08 09:45:54.890098837 +0000 UTC m=+0.164969246 container attach 95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35 (image=quay.io/ceph/ceph:v19, name=sharp_shannon, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:54 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:45:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  8 04:45:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352056455' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  8 04:45:55 np0005550137 sharp_shannon[81002]: 
Dec  8 04:45:55 np0005550137 sharp_shannon[81002]: {"fsid":"ceb838ef-9d5d-54e4-bddb-2f01adce2ad4","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":55,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-08T09:44:57:301434+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-08T09:44:57.306979+0000","services":{}},"progress_events":{}}
Dec  8 04:45:55 np0005550137 systemd[1]: libpod-95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35.scope: Deactivated successfully.
Dec  8 04:45:55 np0005550137 podman[81029]: 2025-12-08 09:45:55.429996725 +0000 UTC m=+0.027551622 container died 95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35 (image=quay.io/ceph/ceph:v19, name=sharp_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  8 04:45:55 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e82180f112f1c26579b446df86a16ee909b5b3a67f328c2dbab626e2c734544e-merged.mount: Deactivated successfully.
Dec  8 04:45:55 np0005550137 podman[81029]: 2025-12-08 09:45:55.464818606 +0000 UTC m=+0.062373453 container remove 95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35 (image=quay.io/ceph/ceph:v19, name=sharp_shannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:45:55 np0005550137 systemd[1]: libpod-conmon-95feb6ab31532ae7a81965709876626a7b5d66203f245c3025b6dc221ea6ca35.scope: Deactivated successfully.
Dec  8 04:45:55 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:55 np0005550137 ceph-mon[74516]: Added host compute-2
Dec  8 04:45:55 np0005550137 ceph-mon[74516]: Saving service mon spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:55 np0005550137 ceph-mon[74516]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:55 np0005550137 ceph-mon[74516]: Marking host: compute-0 for OSDSpec preview refresh.
Dec  8 04:45:55 np0005550137 ceph-mon[74516]: Marking host: compute-1 for OSDSpec preview refresh.
Dec  8 04:45:55 np0005550137 ceph-mon[74516]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Dec  8 04:45:57 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:59 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:45:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:01 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:03 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:05 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:07 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:09 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:46:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:46:11 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:46:11 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:46:11 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:11 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:46:11 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:46:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:46:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:46:12 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:46:12 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:46:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:46:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:46:13 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:46:13 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:46:13 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:46:13 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 6711183f-9a9a-4164-830b-6cbebd473c6a (Updating crash deployment (+1 -> 2))
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:46:14.021+0000 7fa158dc6640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: service_name: mon
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: placement:
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  hosts:
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  - compute-0
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  - compute-1
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  - compute-2
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:46:14.022+0000 7fa158dc6640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: service_name: mgr
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: placement:
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  hosts:
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  - compute-0
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  - compute-1
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  - compute-2
Dec  8 04:46:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Dec  8 04:46:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec  8 04:46:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: Deploying daemon crash.compute-1 on compute-1
Dec  8 04:46:15 np0005550137 ceph-mon[74516]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Dec  8 04:46:16 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:16 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 6711183f-9a9a-4164-830b-6cbebd473c6a (Updating crash deployment (+1 -> 2))
Dec  8 04:46:16 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 6711183f-9a9a-4164-830b-6cbebd473c6a (Updating crash deployment (+1 -> 2)) in 3 seconds
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:17 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:17 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:17 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:17 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:17 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:46:17 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:46:17 np0005550137 podman[81135]: 2025-12-08 09:46:17.427968165 +0000 UTC m=+0.037475576 container create bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  8 04:46:17 np0005550137 systemd[1]: Started libpod-conmon-bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661.scope.
Dec  8 04:46:17 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:17 np0005550137 podman[81135]: 2025-12-08 09:46:17.412004381 +0000 UTC m=+0.021511792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:17 np0005550137 podman[81135]: 2025-12-08 09:46:17.521476831 +0000 UTC m=+0.130984322 container init bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:46:17 np0005550137 podman[81135]: 2025-12-08 09:46:17.527228213 +0000 UTC m=+0.136735624 container start bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:46:17 np0005550137 podman[81135]: 2025-12-08 09:46:17.529966059 +0000 UTC m=+0.139473520 container attach bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:46:17 np0005550137 zen_darwin[81151]: 167 167
Dec  8 04:46:17 np0005550137 systemd[1]: libpod-bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661.scope: Deactivated successfully.
Dec  8 04:46:17 np0005550137 podman[81135]: 2025-12-08 09:46:17.534921887 +0000 UTC m=+0.144429338 container died bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  8 04:46:17 np0005550137 systemd[1]: var-lib-containers-storage-overlay-6424cfd62d80e5451d0720b103e2e63992b238847fa19a787e010ddcb8ab1800-merged.mount: Deactivated successfully.
Dec  8 04:46:17 np0005550137 podman[81135]: 2025-12-08 09:46:17.576995857 +0000 UTC m=+0.186503268 container remove bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_darwin, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:46:17 np0005550137 systemd[1]: libpod-conmon-bead48935c4afa872af94e9daeb031f40cd7e752bfbd95ea3af9d704ade79661.scope: Deactivated successfully.
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] Optimize plan auto_2025-12-08_09:46:17
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] do_upmap
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] No pools available
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] _maybe_adjust
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 2 completed events
Dec  8 04:46:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:46:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:46:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:46:17 np0005550137 podman[81173]: 2025-12-08 09:46:17.741252899 +0000 UTC m=+0.051617873 container create 029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_volhard, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:46:17 np0005550137 systemd[1]: Started libpod-conmon-029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186.scope.
Dec  8 04:46:17 np0005550137 podman[81173]: 2025-12-08 09:46:17.715304138 +0000 UTC m=+0.025669192 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:17 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43cfe0cc4ae208541e25e1ce78c66d4374e7c311911199c81c92e42241341569/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43cfe0cc4ae208541e25e1ce78c66d4374e7c311911199c81c92e42241341569/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43cfe0cc4ae208541e25e1ce78c66d4374e7c311911199c81c92e42241341569/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43cfe0cc4ae208541e25e1ce78c66d4374e7c311911199c81c92e42241341569/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43cfe0cc4ae208541e25e1ce78c66d4374e7c311911199c81c92e42241341569/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:17 np0005550137 podman[81173]: 2025-12-08 09:46:17.848058456 +0000 UTC m=+0.158423470 container init 029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_volhard, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  8 04:46:17 np0005550137 podman[81173]: 2025-12-08 09:46:17.862558144 +0000 UTC m=+0.172923118 container start 029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_volhard, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:46:17 np0005550137 podman[81173]: 2025-12-08 09:46:17.866376274 +0000 UTC m=+0.176741288 container attach 029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:46:18 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: --> passed data devices: 0 physical, 1 LVM
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 10863df8-16d4-4896-ae26-227efb76290e
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "c550a2b3-dc83-454a-a82b-745064d6ae84"} v 0)
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/748161812' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c550a2b3-dc83-454a-a82b-745064d6ae84"}]: dispatch
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/748161812' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c550a2b3-dc83-454a-a82b-745064d6ae84"}]': finished
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:18 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "10863df8-16d4-4896-ae26-227efb76290e"} v 0)
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/604360874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "10863df8-16d4-4896-ae26-227efb76290e"}]: dispatch
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/604360874' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "10863df8-16d4-4896-ae26-227efb76290e"}]': finished
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.101:0/748161812' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "c550a2b3-dc83-454a-a82b-745064d6ae84"}]: dispatch
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.101:0/748161812' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "c550a2b3-dc83-454a-a82b-745064d6ae84"}]': finished
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:18 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:18 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:18 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec  8 04:46:18 np0005550137 lvm[81251]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:46:18 np0005550137 lvm[81251]: VG ceph_vg0 finished
Dec  8 04:46:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  8 04:46:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1327518535' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  8 04:46:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec  8 04:46:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141389350' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Dec  8 04:46:19 np0005550137 inspiring_volhard[81189]: stderr: got monmap epoch 1
Dec  8 04:46:19 np0005550137 inspiring_volhard[81189]: --> Creating keyring file for osd.1
Dec  8 04:46:19 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec  8 04:46:19 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec  8 04:46:19 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 10863df8-16d4-4896-ae26-227efb76290e --setuser ceph --setgroup ceph
Dec  8 04:46:19 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/604360874' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "10863df8-16d4-4896-ae26-227efb76290e"}]: dispatch
Dec  8 04:46:19 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/604360874' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "10863df8-16d4-4896-ae26-227efb76290e"}]': finished
Dec  8 04:46:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:20 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:20 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  8 04:46:21 np0005550137 ceph-mon[74516]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec  8 04:46:22 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: stderr: 2025-12-08T09:46:19.498+0000 7f2b7bab1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: stderr: 2025-12-08T09:46:19.764+0000 7f2b7bab1740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  8 04:46:22 np0005550137 inspiring_volhard[81189]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec  8 04:46:22 np0005550137 systemd[1]: libpod-029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186.scope: Deactivated successfully.
Dec  8 04:46:22 np0005550137 systemd[1]: libpod-029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186.scope: Consumed 2.218s CPU time.
Dec  8 04:46:22 np0005550137 podman[81173]: 2025-12-08 09:46:22.504801049 +0000 UTC m=+4.815166033 container died 029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_volhard, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  8 04:46:22 np0005550137 systemd[1]: var-lib-containers-storage-overlay-43cfe0cc4ae208541e25e1ce78c66d4374e7c311911199c81c92e42241341569-merged.mount: Deactivated successfully.
Dec  8 04:46:22 np0005550137 podman[81173]: 2025-12-08 09:46:22.550965459 +0000 UTC m=+4.861330433 container remove 029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_volhard, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:46:22 np0005550137 systemd[1]: libpod-conmon-029da3585f797d423a416bfca3f619a2464a942c6b4a4957dfd1b4406e1c0186.scope: Deactivated successfully.
Dec  8 04:46:23 np0005550137 podman[82275]: 2025-12-08 09:46:23.143989886 +0000 UTC m=+0.037729113 container create 7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:46:23 np0005550137 systemd[1]: Started libpod-conmon-7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372.scope.
Dec  8 04:46:23 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:23 np0005550137 podman[82275]: 2025-12-08 09:46:23.126491452 +0000 UTC m=+0.020230679 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:23 np0005550137 podman[82275]: 2025-12-08 09:46:23.231303716 +0000 UTC m=+0.125042973 container init 7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  8 04:46:23 np0005550137 podman[82275]: 2025-12-08 09:46:23.244139762 +0000 UTC m=+0.137878999 container start 7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:46:23 np0005550137 podman[82275]: 2025-12-08 09:46:23.248144059 +0000 UTC m=+0.141883306 container attach 7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  8 04:46:23 np0005550137 peaceful_pasteur[82291]: 167 167
Dec  8 04:46:23 np0005550137 systemd[1]: libpod-7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372.scope: Deactivated successfully.
Dec  8 04:46:23 np0005550137 podman[82275]: 2025-12-08 09:46:23.252204136 +0000 UTC m=+0.145943383 container died 7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:46:23 np0005550137 systemd[1]: var-lib-containers-storage-overlay-4146c93e0d52ce21400b4b6bcdc5c48da4b25e099648c04bd5826d2b2e1958af-merged.mount: Deactivated successfully.
Dec  8 04:46:23 np0005550137 podman[82275]: 2025-12-08 09:46:23.306828774 +0000 UTC m=+0.200567991 container remove 7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_pasteur, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  8 04:46:23 np0005550137 systemd[1]: libpod-conmon-7c91970eb6af7ab6c1442f58fd653052e20174e6bb911989d67f3933f4783372.scope: Deactivated successfully.
Dec  8 04:46:23 np0005550137 podman[82315]: 2025-12-08 09:46:23.476380903 +0000 UTC m=+0.044605101 container create 0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  8 04:46:23 np0005550137 systemd[1]: Started libpod-conmon-0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa.scope.
Dec  8 04:46:23 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:23 np0005550137 podman[82315]: 2025-12-08 09:46:23.460343217 +0000 UTC m=+0.028567435 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032327dc43480c6edd83bae08a7d2ed48931721f9f1f82956d14bd1f28e45456/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032327dc43480c6edd83bae08a7d2ed48931721f9f1f82956d14bd1f28e45456/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032327dc43480c6edd83bae08a7d2ed48931721f9f1f82956d14bd1f28e45456/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032327dc43480c6edd83bae08a7d2ed48931721f9f1f82956d14bd1f28e45456/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:23 np0005550137 podman[82315]: 2025-12-08 09:46:23.580554657 +0000 UTC m=+0.148778885 container init 0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackwell, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:46:23 np0005550137 podman[82315]: 2025-12-08 09:46:23.586588908 +0000 UTC m=+0.154813106 container start 0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:46:23 np0005550137 podman[82315]: 2025-12-08 09:46:23.5904381 +0000 UTC m=+0.158662328 container attach 0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]: {
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:    "1": [
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:        {
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "devices": [
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "/dev/loop3"
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            ],
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "lv_name": "ceph_lv0",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "lv_size": "21470642176",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ceb838ef-9d5d-54e4-bddb-2f01adce2ad4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=10863df8-16d4-4896-ae26-227efb76290e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "lv_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "name": "ceph_lv0",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "tags": {
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.block_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.cephx_lockbox_secret": "",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.cluster_fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.cluster_name": "ceph",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.crush_device_class": "",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.encrypted": "0",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.osd_fsid": "10863df8-16d4-4896-ae26-227efb76290e",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.osd_id": "1",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.type": "block",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.vdo": "0",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:                "ceph.with_tpm": "0"
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            },
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "type": "block",
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:            "vg_name": "ceph_vg0"
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:        }
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]:    ]
Dec  8 04:46:23 np0005550137 eager_blackwell[82331]: }
Dec  8 04:46:23 np0005550137 systemd[1]: libpod-0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa.scope: Deactivated successfully.
Dec  8 04:46:23 np0005550137 podman[82315]: 2025-12-08 09:46:23.891350262 +0000 UTC m=+0.459574460 container died 0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  8 04:46:23 np0005550137 systemd[1]: var-lib-containers-storage-overlay-032327dc43480c6edd83bae08a7d2ed48931721f9f1f82956d14bd1f28e45456-merged.mount: Deactivated successfully.
Dec  8 04:46:23 np0005550137 podman[82315]: 2025-12-08 09:46:23.939588017 +0000 UTC m=+0.507812225 container remove 0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  8 04:46:23 np0005550137 systemd[1]: libpod-conmon-0e3f9a60a4553b06cbfe6528194d7899c7f177e88bea027f545f68eb791136fa.scope: Deactivated successfully.
Dec  8 04:46:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:24 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec  8 04:46:24 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec  8 04:46:24 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:24 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Dec  8 04:46:24 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Dec  8 04:46:24 np0005550137 podman[82441]: 2025-12-08 09:46:24.581101327 +0000 UTC m=+0.040927555 container create 031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keller, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:46:24 np0005550137 systemd[1]: Started libpod-conmon-031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe.scope.
Dec  8 04:46:24 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:24 np0005550137 podman[82441]: 2025-12-08 09:46:24.650866923 +0000 UTC m=+0.110693161 container init 031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  8 04:46:24 np0005550137 podman[82441]: 2025-12-08 09:46:24.561345093 +0000 UTC m=+0.021171301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:24 np0005550137 podman[82441]: 2025-12-08 09:46:24.665093473 +0000 UTC m=+0.124919701 container start 031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keller, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:46:24 np0005550137 podman[82441]: 2025-12-08 09:46:24.669710168 +0000 UTC m=+0.129536386 container attach 031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keller, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:46:24 np0005550137 friendly_keller[82457]: 167 167
Dec  8 04:46:24 np0005550137 systemd[1]: libpod-031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe.scope: Deactivated successfully.
Dec  8 04:46:24 np0005550137 podman[82441]: 2025-12-08 09:46:24.671891517 +0000 UTC m=+0.131717745 container died 031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:46:24 np0005550137 systemd[1]: var-lib-containers-storage-overlay-ad1d37f136fd4256c95abf02e6b264f62bf32973550564cbc8863530bf7c8764-merged.mount: Deactivated successfully.
Dec  8 04:46:24 np0005550137 podman[82441]: 2025-12-08 09:46:24.714285827 +0000 UTC m=+0.174112025 container remove 031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_keller, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:46:24 np0005550137 systemd[1]: libpod-conmon-031ab8c8afcd2ef580d4ce4d4e36b941ea86ddd1346c3e67722cb1a28a5b71fe.scope: Deactivated successfully.
Dec  8 04:46:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:25 np0005550137 podman[82486]: 2025-12-08 09:46:25.043875226 +0000 UTC m=+0.054561225 container create b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:46:25 np0005550137 ceph-mon[74516]: Deploying daemon osd.1 on compute-0
Dec  8 04:46:25 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  8 04:46:25 np0005550137 systemd[1]: Started libpod-conmon-b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d.scope.
Dec  8 04:46:25 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:25 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea0e27b670f47c3e93534b63b47668e2fb5db7fd0f162c557b56bf41ce8378/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:25 np0005550137 podman[82486]: 2025-12-08 09:46:25.026736865 +0000 UTC m=+0.037422874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:25 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea0e27b670f47c3e93534b63b47668e2fb5db7fd0f162c557b56bf41ce8378/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:25 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea0e27b670f47c3e93534b63b47668e2fb5db7fd0f162c557b56bf41ce8378/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:25 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea0e27b670f47c3e93534b63b47668e2fb5db7fd0f162c557b56bf41ce8378/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:25 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69ea0e27b670f47c3e93534b63b47668e2fb5db7fd0f162c557b56bf41ce8378/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:25 np0005550137 podman[82486]: 2025-12-08 09:46:25.140272324 +0000 UTC m=+0.150958413 container init b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:46:25 np0005550137 podman[82486]: 2025-12-08 09:46:25.15406945 +0000 UTC m=+0.164755439 container start b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  8 04:46:25 np0005550137 podman[82486]: 2025-12-08 09:46:25.15785444 +0000 UTC m=+0.168540449 container attach b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  8 04:46:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test[82502]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec  8 04:46:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test[82502]:                            [--no-systemd] [--no-tmpfs]
Dec  8 04:46:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test[82502]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec  8 04:46:25 np0005550137 systemd[1]: libpod-b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d.scope: Deactivated successfully.
Dec  8 04:46:25 np0005550137 podman[82486]: 2025-12-08 09:46:25.332594681 +0000 UTC m=+0.343280730 container died b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:46:25 np0005550137 systemd[1]: var-lib-containers-storage-overlay-69ea0e27b670f47c3e93534b63b47668e2fb5db7fd0f162c557b56bf41ce8378-merged.mount: Deactivated successfully.
Dec  8 04:46:25 np0005550137 podman[82486]: 2025-12-08 09:46:25.378120958 +0000 UTC m=+0.388806987 container remove b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  8 04:46:25 np0005550137 systemd[1]: libpod-conmon-b5f66ddb366c7a08417fcd484eec1ded1d3360ae74c34e007aa5a7f23da12f1d.scope: Deactivated successfully.
Dec  8 04:46:26 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:26 np0005550137 systemd[1]: Reloading.
Dec  8 04:46:26 np0005550137 ceph-mon[74516]: Deploying daemon osd.0 on compute-1
Dec  8 04:46:26 np0005550137 python3[82559]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:46:26 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:46:26 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:46:26 np0005550137 podman[82599]: 2025-12-08 09:46:26.246510786 +0000 UTC m=+0.045703894 container create f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c (image=quay.io/ceph/ceph:v19, name=frosty_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  8 04:46:26 np0005550137 podman[82599]: 2025-12-08 09:46:26.230057972 +0000 UTC m=+0.029251090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:46:26 np0005550137 systemd[1]: Started libpod-conmon-f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c.scope.
Dec  8 04:46:26 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:26 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9179c94f34284fc74c59530bc2b41388d127104b39794986e99a02d13b4968c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:26 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9179c94f34284fc74c59530bc2b41388d127104b39794986e99a02d13b4968c2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:26 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9179c94f34284fc74c59530bc2b41388d127104b39794986e99a02d13b4968c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:26 np0005550137 podman[82599]: 2025-12-08 09:46:26.427461527 +0000 UTC m=+0.226654705 container init f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c (image=quay.io/ceph/ceph:v19, name=frosty_galois, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  8 04:46:26 np0005550137 systemd[1]: Reloading.
Dec  8 04:46:26 np0005550137 podman[82599]: 2025-12-08 09:46:26.436181459 +0000 UTC m=+0.235374607 container start f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c (image=quay.io/ceph/ceph:v19, name=frosty_galois, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:46:26 np0005550137 podman[82599]: 2025-12-08 09:46:26.44020238 +0000 UTC m=+0.239395518 container attach f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c (image=quay.io/ceph/ceph:v19, name=frosty_galois, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  8 04:46:26 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:46:26 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:46:26 np0005550137 systemd[1]: Starting Ceph osd.1 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:46:26 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  8 04:46:26 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2234269109' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  8 04:46:26 np0005550137 frosty_galois[82617]: 
Dec  8 04:46:26 np0005550137 frosty_galois[82617]: {"fsid":"ceb838ef-9d5d-54e4-bddb-2f01adce2ad4","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":87,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1765187178,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-08T09:44:57:301434+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-08T09:46:20.024511+0000","services":{}},"progress_events":{}}
Dec  8 04:46:26 np0005550137 systemd[1]: libpod-f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c.scope: Deactivated successfully.
Dec  8 04:46:26 np0005550137 podman[82599]: 2025-12-08 09:46:26.896642511 +0000 UTC m=+0.695835629 container died f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c (image=quay.io/ceph/ceph:v19, name=frosty_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  8 04:46:26 np0005550137 systemd[1]: var-lib-containers-storage-overlay-9179c94f34284fc74c59530bc2b41388d127104b39794986e99a02d13b4968c2-merged.mount: Deactivated successfully.
Dec  8 04:46:26 np0005550137 podman[82599]: 2025-12-08 09:46:26.967487657 +0000 UTC m=+0.766680765 container remove f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c (image=quay.io/ceph/ceph:v19, name=frosty_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:46:26 np0005550137 systemd[1]: libpod-conmon-f20b359a428f94489719cb37e56e0a7ec5f1747df2bf780df42e9df0c0103e9c.scope: Deactivated successfully.
Dec  8 04:46:27 np0005550137 podman[82741]: 2025-12-08 09:46:27.002017924 +0000 UTC m=+0.050493427 container create 0156515020aeec5468e56368a614b5b168787783ac2d1a80ce6788f1974d7056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  8 04:46:27 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d38cdda731b8a32fb35c9c724c3338f42d78d33066b71ba2a06271119d43f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d38cdda731b8a32fb35c9c724c3338f42d78d33066b71ba2a06271119d43f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d38cdda731b8a32fb35c9c724c3338f42d78d33066b71ba2a06271119d43f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d38cdda731b8a32fb35c9c724c3338f42d78d33066b71ba2a06271119d43f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f17d38cdda731b8a32fb35c9c724c3338f42d78d33066b71ba2a06271119d43f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:27 np0005550137 podman[82741]: 2025-12-08 09:46:27.066539031 +0000 UTC m=+0.115014524 container init 0156515020aeec5468e56368a614b5b168787783ac2d1a80ce6788f1974d7056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:46:27 np0005550137 podman[82741]: 2025-12-08 09:46:27.074288993 +0000 UTC m=+0.122764496 container start 0156515020aeec5468e56368a614b5b168787783ac2d1a80ce6788f1974d7056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  8 04:46:27 np0005550137 podman[82741]: 2025-12-08 09:46:27.077541451 +0000 UTC m=+0.126016954 container attach 0156515020aeec5468e56368a614b5b168787783ac2d1a80ce6788f1974d7056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:46:27 np0005550137 podman[82741]: 2025-12-08 09:46:26.983443967 +0000 UTC m=+0.031919500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:27 np0005550137 bash[82741]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:27 np0005550137 bash[82741]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:27 np0005550137 lvm[82838]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:46:27 np0005550137 lvm[82838]: VG ceph_vg0 finished
Dec  8 04:46:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  8 04:46:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:27 np0005550137 bash[82741]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec  8 04:46:27 np0005550137 bash[82741]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:27 np0005550137 bash[82741]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec  8 04:46:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  8 04:46:27 np0005550137 bash[82741]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  8 04:46:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  8 04:46:27 np0005550137 bash[82741]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec  8 04:46:28 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:28 np0005550137 bash[82741]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:28 np0005550137 bash[82741]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  8 04:46:28 np0005550137 bash[82741]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec  8 04:46:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  8 04:46:28 np0005550137 bash[82741]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec  8 04:46:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate[82757]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  8 04:46:28 np0005550137 bash[82741]: --> ceph-volume lvm activate successful for osd ID: 1
Dec  8 04:46:28 np0005550137 systemd[1]: libpod-0156515020aeec5468e56368a614b5b168787783ac2d1a80ce6788f1974d7056.scope: Deactivated successfully.
Dec  8 04:46:28 np0005550137 systemd[1]: libpod-0156515020aeec5468e56368a614b5b168787783ac2d1a80ce6788f1974d7056.scope: Consumed 1.419s CPU time.
Dec  8 04:46:28 np0005550137 podman[82741]: 2025-12-08 09:46:28.324201533 +0000 UTC m=+1.372677066 container died 0156515020aeec5468e56368a614b5b168787783ac2d1a80ce6788f1974d7056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:46:28 np0005550137 systemd[1]: var-lib-containers-storage-overlay-f17d38cdda731b8a32fb35c9c724c3338f42d78d33066b71ba2a06271119d43f-merged.mount: Deactivated successfully.
Dec  8 04:46:28 np0005550137 podman[82741]: 2025-12-08 09:46:28.395158383 +0000 UTC m=+1.443633886 container remove 0156515020aeec5468e56368a614b5b168787783ac2d1a80ce6788f1974d7056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1-activate, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec  8 04:46:28 np0005550137 podman[82989]: 2025-12-08 09:46:28.556737354 +0000 UTC m=+0.035957431 container create 7dfc4fabbb2f8d134cfb5525c7d017c32399b44e5cb25f22dd419235416a69d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:46:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7776a5b854a995b063ae0d580c2fc9da44089024defee9405b43b1ddc41693/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7776a5b854a995b063ae0d580c2fc9da44089024defee9405b43b1ddc41693/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7776a5b854a995b063ae0d580c2fc9da44089024defee9405b43b1ddc41693/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7776a5b854a995b063ae0d580c2fc9da44089024defee9405b43b1ddc41693/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7776a5b854a995b063ae0d580c2fc9da44089024defee9405b43b1ddc41693/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:28 np0005550137 podman[82989]: 2025-12-08 09:46:28.61193989 +0000 UTC m=+0.091159967 container init 7dfc4fabbb2f8d134cfb5525c7d017c32399b44e5cb25f22dd419235416a69d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:46:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:28 np0005550137 podman[82989]: 2025-12-08 09:46:28.620681283 +0000 UTC m=+0.099901330 container start 7dfc4fabbb2f8d134cfb5525c7d017c32399b44e5cb25f22dd419235416a69d4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  8 04:46:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:28 np0005550137 bash[82989]: 7dfc4fabbb2f8d134cfb5525c7d017c32399b44e5cb25f22dd419235416a69d4
Dec  8 04:46:28 np0005550137 podman[82989]: 2025-12-08 09:46:28.539692742 +0000 UTC m=+0.018912809 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:28 np0005550137 systemd[1]: Started Ceph osd.1 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: set uid:gid to 167:167 (ceph:ceph)
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: pidfile_write: ignore empty --pid-file
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:46:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:46:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:28 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:29 np0005550137 podman[83118]: 2025-12-08 09:46:29.186838408 +0000 UTC m=+0.038315922 container create c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  8 04:46:29 np0005550137 systemd[1]: Started libpod-conmon-c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2.scope.
Dec  8 04:46:29 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:29 np0005550137 podman[83118]: 2025-12-08 09:46:29.167875248 +0000 UTC m=+0.019352782 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:29 np0005550137 podman[83118]: 2025-12-08 09:46:29.278940933 +0000 UTC m=+0.130418537 container init c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:46:29 np0005550137 podman[83118]: 2025-12-08 09:46:29.292691485 +0000 UTC m=+0.144169029 container start c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:46:29 np0005550137 podman[83118]: 2025-12-08 09:46:29.296503089 +0000 UTC m=+0.147980633 container attach c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  8 04:46:29 np0005550137 determined_visvesvaraya[83134]: 167 167
Dec  8 04:46:29 np0005550137 systemd[1]: libpod-c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2.scope: Deactivated successfully.
Dec  8 04:46:29 np0005550137 podman[83118]: 2025-12-08 09:46:29.299081117 +0000 UTC m=+0.150558641 container died c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay-33b160df00ab96e26390fbd1a8208dafa4ef2cef588f8c8e4ae638e4539a2f00-merged.mount: Deactivated successfully.
Dec  8 04:46:29 np0005550137 podman[83118]: 2025-12-08 09:46:29.349434939 +0000 UTC m=+0.200912493 container remove c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:46:29 np0005550137 systemd[1]: libpod-conmon-c5ebdc6a3412e3ae509c3e21684c353697be9ffdd109a2f0643fc8c92b8035f2.scope: Deactivated successfully.
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa754c1800 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:29 np0005550137 podman[83163]: 2025-12-08 09:46:29.600902987 +0000 UTC m=+0.063596261 container create 92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  8 04:46:29 np0005550137 systemd[1]: Started libpod-conmon-92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135.scope.
Dec  8 04:46:29 np0005550137 podman[83163]: 2025-12-08 09:46:29.581184285 +0000 UTC m=+0.043877549 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:29 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4cccd903694e02b9dc2646c3c0d2c114a3074f44e752750ca1c21aad763fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4cccd903694e02b9dc2646c3c0d2c114a3074f44e752750ca1c21aad763fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4cccd903694e02b9dc2646c3c0d2c114a3074f44e752750ca1c21aad763fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4cccd903694e02b9dc2646c3c0d2c114a3074f44e752750ca1c21aad763fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:29 np0005550137 podman[83163]: 2025-12-08 09:46:29.713416765 +0000 UTC m=+0.176110099 container init 92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:46:29 np0005550137 podman[83163]: 2025-12-08 09:46:29.729185598 +0000 UTC m=+0.191878852 container start 92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:46:29 np0005550137 podman[83163]: 2025-12-08 09:46:29.733056164 +0000 UTC m=+0.195749458 container attach 92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec  8 04:46:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: load: jerasure load: lrc 
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  8 04:46:29 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:30 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:30 np0005550137 lvm[83271]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:46:30 np0005550137 lvm[83271]: VG ceph_vg0 finished
Dec  8 04:46:30 np0005550137 trusting_brahmagupta[83180]: {}
Dec  8 04:46:30 np0005550137 systemd[1]: libpod-92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135.scope: Deactivated successfully.
Dec  8 04:46:30 np0005550137 systemd[1]: libpod-92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135.scope: Consumed 1.295s CPU time.
Dec  8 04:46:30 np0005550137 podman[83163]: 2025-12-08 09:46:30.57242636 +0000 UTC m=+1.035119644 container died 92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  8 04:46:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:30 np0005550137 systemd[1]: var-lib-containers-storage-overlay-1de4cccd903694e02b9dc2646c3c0d2c114a3074f44e752750ca1c21aad763fc-merged.mount: Deactivated successfully.
Dec  8 04:46:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:30 np0005550137 podman[83163]: 2025-12-08 09:46:30.632625107 +0000 UTC m=+1.095318401 container remove 92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:46:30 np0005550137 systemd[1]: libpod-conmon-92f0aa189f7d53a189bcf7844e809b0807adad5b64fe71f546cbc379272a7135.scope: Deactivated successfully.
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:46:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:46:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76378c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs mount
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs mount shared_bdev_used = 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: RocksDB version: 7.9.2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Git sha 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: DB SUMMARY
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: DB Session ID:  DKEFO6RJPLWZ8UUC3GGK
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: CURRENT file:  CURRENT
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: IDENTITY file:  IDENTITY
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                         Options.error_if_exists: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.create_if_missing: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                         Options.paranoid_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                                     Options.env: 0x55aa76337dc0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                                Options.info_log: 0x55aa7633b7a0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_file_opening_threads: 16
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                              Options.statistics: (nil)
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.use_fsync: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.max_log_file_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                         Options.allow_fallocate: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.use_direct_reads: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.create_missing_column_families: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                              Options.db_log_dir: 
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                                 Options.wal_dir: db.wal
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.advise_random_on_open: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.write_buffer_manager: 0x55aa76444a00
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                            Options.rate_limiter: (nil)
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.unordered_write: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.row_cache: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                              Options.wal_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.allow_ingest_behind: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.two_write_queues: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.manual_wal_flush: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.wal_compression: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.atomic_flush: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.log_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.allow_data_in_errors: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.db_host_id: __hostname__
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.max_background_jobs: 4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.max_background_compactions: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.max_subcompactions: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.max_open_files: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.bytes_per_sync: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.max_background_flushes: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Compression algorithms supported:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: #011kZSTD supported: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: #011kXpressCompression supported: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: #011kBZip2Compression supported: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: #011kLZ4Compression supported: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: #011kZlibCompression supported: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: #011kLZ4HCCompression supported: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: #011kSnappyCompression supported: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: DMutex implementation: pthread_mutex_t
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa75557350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa75557350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa75557350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa75557350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa75557350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa75557350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa75557350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa755569b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa755569b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa755569b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e4e9ce99-17b1-48e3-99f8-43b7f5d91cf7
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187190967879, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187190968083, "job": 1, "event": "recovery_finished"}
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: freelist init
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: freelist _read_cfg
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bluefs umount
Dec  8 04:46:30 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) close
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bdev(0x55aa76379000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs mount
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluefs mount shared_bdev_used = 4718592
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: RocksDB version: 7.9.2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Git sha 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Compile date 2025-07-17 03:12:14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: DB SUMMARY
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: DB Session ID:  DKEFO6RJPLWZ8UUC3GGL
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: CURRENT file:  CURRENT
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: IDENTITY file:  IDENTITY
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                         Options.error_if_exists: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.create_if_missing: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                         Options.paranoid_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                                     Options.env: 0x55aa764e82a0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                                Options.info_log: 0x55aa7633b940
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_file_opening_threads: 16
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                              Options.statistics: (nil)
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.use_fsync: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.max_log_file_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.keep_log_file_num: 1000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.recycle_log_file_num: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                         Options.allow_fallocate: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.allow_mmap_reads: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.allow_mmap_writes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.use_direct_reads: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.create_missing_column_families: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                              Options.db_log_dir: 
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                                 Options.wal_dir: db.wal
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.table_cache_numshardbits: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.advise_random_on_open: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.db_write_buffer_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.write_buffer_manager: 0x55aa76444a00
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                            Options.rate_limiter: (nil)
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.wal_recovery_mode: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.enable_thread_tracking: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.enable_pipelined_write: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.unordered_write: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.row_cache: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                              Options.wal_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.allow_ingest_behind: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.two_write_queues: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.manual_wal_flush: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.wal_compression: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.atomic_flush: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.log_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.best_efforts_recovery: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.allow_data_in_errors: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.db_host_id: __hostname__
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.enforce_single_del_contracts: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.max_background_jobs: 4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.max_background_compactions: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.max_subcompactions: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.delayed_write_rate : 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.max_open_files: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.bytes_per_sync: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.max_background_flushes: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Compression algorithms supported:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: 	kZSTD supported: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: 	kXpressCompression supported: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: 	kBZip2Compression supported: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: 	kLZ4Compression supported: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: 	kZlibCompression supported: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: 	kLZ4HCCompression supported: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: 	kSnappyCompression supported: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Fast CRC32 supported: Supported on x86
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: DMutex implementation: pthread_mutex_t
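The option lines in this dump follow a regular `rocksdb: ... Options.<name>: <value>` shape inside the syslog wrapper. A minimal sketch for pulling them out of a capture like this one (the regex and function name are illustrative, not part of Ceph or RocksDB):

```python
import re

# Matches syslog-wrapped RocksDB option lines such as:
#   "... rocksdb:   Options.max_open_files: -1"
# Some names carry a trailing space before the colon (e.g. delayed_write_rate),
# hence the \s* before ":".
OPTION_RE = re.compile(r"rocksdb:\s+Options\.([\w.\[\]]+)\s*:\s*(.*)$")

def parse_rocksdb_options(lines):
    """Collect Options.* key/value pairs from syslog lines into a dict."""
    opts = {}
    for line in lines:
        m = OPTION_RE.search(line)
        if m:
            opts[m.group(1)] = m.group(2).strip()
    return opts

sample = [
    "Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.max_open_files: -1",
    "Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.wal_dir: db.wal",
]
print(parse_rocksdb_options(sample))
# → {'max_open_files': '-1', 'wal_dir': 'db.wal'}
```

Note that later in the dump the same option names repeat once per column family (`default`, `m-0`, `m-1`, ...), so a dict built this way keeps only the last occurrence; split the input at the `Options for column family` markers first if per-family values matter.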
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa75557350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
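The `[default]` column family values above imply a bounded memtable budget. Under the usual RocksDB semantics (stated here as an assumption, not something the log itself says), a flush is scheduled once `min_write_buffer_number_to_merge` memtables of `write_buffer_size` each have filled, and writes stall outright once `max_write_buffer_number` memtables exist:

```python
# Values copied from the [default] column family dump above.
write_buffer_size = 16_777_216               # 16 MiB per memtable
max_write_buffer_number = 64
min_write_buffer_number_to_merge = 6

flush_threshold = write_buffer_size * min_write_buffer_number_to_merge
memtable_cap = write_buffer_size * max_write_buffer_number

print(flush_threshold)   # 100663296 bytes = 96 MiB buffered before a flush
print(memtable_cap)      # 1073741824 bytes = 1 GiB hard memtable ceiling
```

So with these settings each column family can accumulate roughly 96 MiB of writes before flushing, with up to 1 GiB of memtables outstanding before write stalls; the same figures repeat for the `m-0` and `m-1` families below.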
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa75557350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa75557350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa75557350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55aa75557350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633b680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa75557350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633b680)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa75557350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa755569b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa755569b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:           Options.merge_operator: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.compaction_filter_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.sst_partitioner_factory: None
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.table_factory: BlockBasedTable
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55aa7633bac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55aa755569b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.write_buffer_size: 16777216
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.max_write_buffer_number: 64
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.compression: LZ4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression: Disabled
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.num_levels: 7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:            Options.compression_opts.window_bits: -14
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.level: 32767
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.compression_opts.strategy: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                  Options.compression_opts.enabled: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.target_file_size_base: 67108864
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:             Options.target_file_size_multiplier: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.arena_block_size: 1048576
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.disable_auto_compactions: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.inplace_update_support: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:   Options.memtable_huge_page_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.bloom_locality: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                    Options.max_successive_merges: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.paranoid_file_checks: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.force_consistency_checks: 1
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.report_bg_io_stats: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                               Options.ttl: 2592000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                       Options.enable_blob_files: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                           Options.min_blob_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                          Options.blob_file_size: 268435456
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb:                Options.blob_file_starting_level: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e4e9ce99-17b1-48e3-99f8-43b7f5d91cf7
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191267559, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191272698, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e4e9ce99-17b1-48e3-99f8-43b7f5d91cf7", "db_session_id": "DKEFO6RJPLWZ8UUC3GGL", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191275970, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 466, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e4e9ce99-17b1-48e3-99f8-43b7f5d91cf7", "db_session_id": "DKEFO6RJPLWZ8UUC3GGL", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191278872, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187191, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e4e9ce99-17b1-48e3-99f8-43b7f5d91cf7", "db_session_id": "DKEFO6RJPLWZ8UUC3GGL", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187191280274, "job": 1, "event": "recovery_finished"}
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55aa75588000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: DB pointer 0x55aa764f4000
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aa75557350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aa75557350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55aa75557350#2 capacity: 460.80 MB usage: 0
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: _get_class not permitted to load lua
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: _get_class not permitted to load sdk
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: osd.1 0 load_pgs
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: osd.1 0 load_pgs opened 0 pgs
Dec  8 04:46:31 np0005550137 ceph-osd[83009]: osd.1 0 log_to_monitors true
Dec  8 04:46:31 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1[83005]: 2025-12-08T09:46:31.315+0000 7fcbe74bd740 -1 osd.1 0 log_to_monitors true
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:31 np0005550137 podman[83839]: 2025-12-08 09:46:31.608679366 +0000 UTC m=+0.063660543 container exec e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  8 04:46:31 np0005550137 podman[83839]: 2025-12-08 09:46:31.739058019 +0000 UTC m=+0.194039176 container exec_died e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  8 04:46:32 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:32 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  8 04:46:32 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec  8 04:46:32 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  8 04:46:33 np0005550137 ceph-osd[83009]: osd.1 0 done with init, starting boot process
Dec  8 04:46:33 np0005550137 ceph-osd[83009]: osd.1 0 start_boot
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Dec  8 04:46:33 np0005550137 ceph-osd[83009]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec  8 04:46:33 np0005550137 ceph-osd[83009]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec  8 04:46:33 np0005550137 ceph-osd[83009]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec  8 04:46:33 np0005550137 ceph-osd[83009]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec  8 04:46:33 np0005550137 ceph-osd[83009]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:33 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:33 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Dec  8 04:46:33 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2769354488; not ready for session (expect reconnect)
Dec  8 04:46:33 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3218940105; not ready for session (expect reconnect)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:33 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  8 04:46:33 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:33 np0005550137 podman[84095]: 2025-12-08 09:46:33.32413971 +0000 UTC m=+0.058782525 container create ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_nobel, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  8 04:46:33 np0005550137 systemd[1]: Started libpod-conmon-ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8.scope.
Dec  8 04:46:33 np0005550137 podman[84095]: 2025-12-08 09:46:33.291592633 +0000 UTC m=+0.026235448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:33 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:33 np0005550137 podman[84095]: 2025-12-08 09:46:33.43443244 +0000 UTC m=+0.169075345 container init ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_nobel, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:46:33 np0005550137 podman[84095]: 2025-12-08 09:46:33.44407845 +0000 UTC m=+0.178721235 container start ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:46:33 np0005550137 mystifying_nobel[84112]: 167 167
Dec  8 04:46:33 np0005550137 systemd[1]: libpod-ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8.scope: Deactivated successfully.
Dec  8 04:46:33 np0005550137 conmon[84112]: conmon ff9172ff980d472558fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8.scope/container/memory.events
Dec  8 04:46:33 np0005550137 podman[84095]: 2025-12-08 09:46:33.45172849 +0000 UTC m=+0.186371305 container attach ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_nobel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  8 04:46:33 np0005550137 podman[84095]: 2025-12-08 09:46:33.455411691 +0000 UTC m=+0.190054496 container died ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_nobel, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:46:33 np0005550137 systemd[1]: var-lib-containers-storage-overlay-425c6a993423bb7d3a305234ace719784e415a44d5ea4b9fcb30449be4f59bb0-merged.mount: Deactivated successfully.
Dec  8 04:46:33 np0005550137 podman[84095]: 2025-12-08 09:46:33.528545066 +0000 UTC m=+0.263187881 container remove ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:46:33 np0005550137 systemd[1]: libpod-conmon-ff9172ff980d472558fab18120d17d17f2645e2840b3fec296e74af4d9b0eae8.scope: Deactivated successfully.
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:33 np0005550137 podman[84135]: 2025-12-08 09:46:33.70158232 +0000 UTC m=+0.045057204 container create 8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:46:33 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5248M
Dec  8 04:46:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5248M
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  8 04:46:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:33 np0005550137 systemd[1]: Started libpod-conmon-8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201.scope.
Dec  8 04:46:33 np0005550137 podman[84135]: 2025-12-08 09:46:33.681884909 +0000 UTC m=+0.025359803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:46:33 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7998066fe3eaa4342e2eeeed28c420ab74cfcd4a9f134289e265148160bc844d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7998066fe3eaa4342e2eeeed28c420ab74cfcd4a9f134289e265148160bc844d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7998066fe3eaa4342e2eeeed28c420ab74cfcd4a9f134289e265148160bc844d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7998066fe3eaa4342e2eeeed28c420ab74cfcd4a9f134289e265148160bc844d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:33 np0005550137 podman[84135]: 2025-12-08 09:46:33.816437518 +0000 UTC m=+0.159912402 container init 8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  8 04:46:33 np0005550137 podman[84135]: 2025-12-08 09:46:33.825674695 +0000 UTC m=+0.169149559 container start 8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec  8 04:46:33 np0005550137 podman[84135]: 2025-12-08 09:46:33.846571902 +0000 UTC m=+0.190046786 container attach 8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_meninsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2769354488; not ready for session (expect reconnect)
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3218940105; not ready for session (expect reconnect)
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: Adjusting osd_memory_target on compute-1 to  5248M
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]: [
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:    {
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "available": false,
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "being_replaced": false,
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "ceph_device_lvm": false,
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "device_id": "QEMU_DVD-ROM_QM00001",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "lsm_data": {},
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "lvs": [],
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "path": "/dev/sr0",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "rejected_reasons": [
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "Has a FileSystem",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "Insufficient space (<5GB)"
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        ],
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        "sys_api": {
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "actuators": null,
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "device_nodes": [
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:                "sr0"
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            ],
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "devname": "sr0",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "human_readable_size": "482.00 KB",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "id_bus": "ata",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "model": "QEMU DVD-ROM",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "nr_requests": "2",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "parent": "/dev/sr0",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "partitions": {},
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "path": "/dev/sr0",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "removable": "1",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "rev": "2.5+",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "ro": "0",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "rotational": "1",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "sas_address": "",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "sas_device_handle": "",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "scheduler_mode": "mq-deadline",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "sectors": 0,
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "sectorsize": "2048",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "size": 493568.0,
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "support_discard": "2048",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "type": "disk",
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:            "vendor": "QEMU"
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:        }
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]:    }
Dec  8 04:46:34 np0005550137 goofy_meninsky[84151]: ]
Dec  8 04:46:34 np0005550137 systemd[1]: libpod-8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201.scope: Deactivated successfully.
Dec  8 04:46:34 np0005550137 podman[84135]: 2025-12-08 09:46:34.652784453 +0000 UTC m=+0.996259357 container died 8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:46:34 np0005550137 systemd[1]: var-lib-containers-storage-overlay-7998066fe3eaa4342e2eeeed28c420ab74cfcd4a9f134289e265148160bc844d-merged.mount: Deactivated successfully.
Dec  8 04:46:34 np0005550137 podman[84135]: 2025-12-08 09:46:34.798753354 +0000 UTC m=+1.142228218 container remove 8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_meninsky, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  8 04:46:34 np0005550137 systemd[1]: libpod-conmon-8e9f4a32061aeec8d4e68dbe48cf251d3e4751f9365074fab5f4d4d15c285201.scope: Deactivated successfully.
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Dec  8 04:46:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  8 04:46:34 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  8 04:46:35 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2769354488; not ready for session (expect reconnect)
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:35 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  8 04:46:35 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3218940105; not ready for session (expect reconnect)
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:35 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: Adjusting osd_memory_target on compute-0 to 128.0M
Dec  8 04:46:35 np0005550137 ceph-mon[74516]: Unable to set osd_memory_target on compute-0 to 134217728: error parsing value: Value '134217728' is below minimum 939524096
Dec  8 04:46:36 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:36 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2769354488; not ready for session (expect reconnect)
Dec  8 04:46:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:36 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  8 04:46:36 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3218940105; not ready for session (expect reconnect)
Dec  8 04:46:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:36 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 29.846 iops: 7640.666 elapsed_sec: 0.393
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: log_channel(cluster) log [WRN] : OSD bench result of 7640.665901 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 0 waiting for initial osdmap
Dec  8 04:46:36 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1[83005]: 2025-12-08T09:46:36.493+0000 7fcbe3440640 -1 osd.1 0 waiting for initial osdmap
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 7 check_osdmap_features require_osd_release unknown -> squid
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 7 set_numa_affinity not setting numa affinity
Dec  8 04:46:36 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-osd-1[83005]: 2025-12-08T09:46:36.518+0000 7fcbdea68640 -1 osd.1 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec  8 04:46:36 np0005550137 ceph-osd[83009]: osd.1 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec  8 04:46:37 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2769354488; not ready for session (expect reconnect)
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:37 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec  8 04:46:37 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/3218940105; not ready for session (expect reconnect)
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:37 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: OSD bench result of 7704.842246 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 e8: 2 total, 2 up, 2 in
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488] boot
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105] boot
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 2 up, 2 in
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:46:37 np0005550137 ceph-osd[83009]: osd.1 8 state: booting -> active
Dec  8 04:46:37 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] creating mgr pool
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec  8 04:46:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  8 04:46:38 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: OSD bench result of 7640.665901 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: osd.1 [v2:192.168.122.100:6802/2769354488,v1:192.168.122.100:6803/2769354488] boot
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: osd.0 [v2:192.168.122.101:6800/3218940105,v1:192.168.122.101:6801/3218940105] boot
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec  8 04:46:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Dec  8 04:46:39 np0005550137 ceph-osd[83009]: osd.1 10 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec  8 04:46:39 np0005550137 ceph-osd[83009]: osd.1 10 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec  8 04:46:39 np0005550137 ceph-osd[83009]: osd.1 10 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec  8 04:46:39 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] creating main.db for devicehealth
Dec  8 04:46:39 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Check health
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  8 04:46:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:40 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec  8 04:46:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec  8 04:46:40 np0005550137 ceph-mon[74516]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec  8 04:46:40 np0005550137 ceph-mon[74516]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec  8 04:46:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Dec  8 04:46:40 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec  8 04:46:40 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.kitiwu(active, since 82s)
Dec  8 04:46:42 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:44 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:46 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:46:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:46:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:46:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:46:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:46:47 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:46:48 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:50 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:46:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:46:51 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:46:51 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:46:52 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:46:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:46:52 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:52 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:52 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:52 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:52 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:46:52 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:46:52 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:46:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:46:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:46:53 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:46:53 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:46:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:46:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:46:54 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:54 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:54 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 9a60ca55-8d34-46b8-9cee-c82837a79f65 (Updating mon deployment (+2 -> 3))
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:54 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Dec  8 04:46:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:46:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:46:55 np0005550137 ceph-mon[74516]: Deploying daemon mon.compute-2 on compute-2
Dec  8 04:46:55 np0005550137 ceph-mon[74516]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Dec  8 04:46:55 np0005550137 ceph-mon[74516]: Cluster is now healthy
Dec  8 04:46:56 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:46:57 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Dec  8 04:46:57 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Dec  8 04:46:57 np0005550137 python3[85253]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Dec  8 04:46:57 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3865363297; not ready for session (expect reconnect)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:46:57 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  8 04:46:57 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:46:57 np0005550137 podman[85255]: 2025-12-08 09:46:57.392023734 +0000 UTC m=+0.048574349 container create 5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345 (image=quay.io/ceph/ceph:v19, name=festive_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:46:57 np0005550137 systemd[1]: Started libpod-conmon-5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345.scope.
Dec  8 04:46:57 np0005550137 podman[85255]: 2025-12-08 09:46:57.368808057 +0000 UTC m=+0.025358712 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:46:57 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:46:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e74484d6235b7f7f412620c2c8445b7e3005282011499c661806d198f4c6e5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e74484d6235b7f7f412620c2c8445b7e3005282011499c661806d198f4c6e5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:57 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7e74484d6235b7f7f412620c2c8445b7e3005282011499c661806d198f4c6e5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:46:57 np0005550137 podman[85255]: 2025-12-08 09:46:57.490293154 +0000 UTC m=+0.146843869 container init 5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345 (image=quay.io/ceph/ceph:v19, name=festive_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  8 04:46:57 np0005550137 podman[85255]: 2025-12-08 09:46:57.500533431 +0000 UTC m=+0.157084056 container start 5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345 (image=quay.io/ceph/ceph:v19, name=festive_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:46:57 np0005550137 podman[85255]: 2025-12-08 09:46:57.50450273 +0000 UTC m=+0.161053465 container attach 5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345 (image=quay.io/ceph/ceph:v19, name=festive_mirzakhani, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  8 04:46:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  8 04:46:58 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  8 04:46:58 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3865363297; not ready for session (expect reconnect)
Dec  8 04:46:58 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:46:58 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:46:58 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  8 04:46:58 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  8 04:46:59 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3865363297; not ready for session (expect reconnect)
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:46:59 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  8 04:46:59 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:46:59 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:46:59 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  8 04:47:00 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3865363297; not ready for session (expect reconnect)
Dec  8 04:47:00 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:47:00 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:47:00 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  8 04:47:00 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:47:00 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  8 04:47:00 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:00 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:00 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:00 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  8 04:47:00 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  8 04:47:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  8 04:47:01 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3865363297; not ready for session (expect reconnect)
Dec  8 04:47:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:47:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:47:01 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  8 04:47:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  8 04:47:01 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:01 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3865363297; not ready for session (expect reconnect)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : monmap epoch 2
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : last_changed 2025-12-08T09:46:57.351280+0000
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : created 2025-12-08T09:44:55.163607+0000
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap 
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.kitiwu(active, since 104s)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 9a60ca55-8d34-46b8-9cee-c82837a79f65 (Updating mon deployment (+2 -> 3))
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 9a60ca55-8d34-46b8-9cee-c82837a79f65 (Updating mon deployment (+2 -> 3)) in 8 seconds
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev d59cca98-32da-4ff4-bf23-f5d3a53dc18c (Updating mgr deployment (+2 -> 3))
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.zqytsv on compute-2
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.zqytsv on compute-2
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: Deploying daemon mon.compute-1 on compute-1
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0 calling monitor election
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-2 calling monitor election
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: overall HEALTH_OK
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  8 04:47:02 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 3 completed events
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:47:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:03 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3865363297; not ready for session (expect reconnect)
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: Deploying daemon mgr.compute-2.zqytsv on compute-2
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Dec  8 04:47:03 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:03 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: paxos.0).electionLogic(10) init, last seen epoch 10
Dec  8 04:47:03 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  8 04:47:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2567547284' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  8 04:47:03 np0005550137 festive_mirzakhani[85271]: 
Dec  8 04:47:03 np0005550137 festive_mirzakhani[85271]: {"fsid":"ceb838ef-9d5d-54e4-bddb-2f01adce2ad4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":11,"quorum":[],"quorum_names":[],"quorum_age":2308,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":2,"osd_up_since":1765187197,"num_in_osds":2,"osd_in_since":1765187178,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894631936,"bytes_avail":42046652416,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2025-12-08T09:44:57:301434+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-08T09:46:20.024511+0000","services":{}},"progress_events":{"9a60ca55-8d34-46b8-9cee-c82837a79f65":{"message":"Updating mon deployment (+2 -> 3) (2s)\n      [==============..............] (remaining: 2s)","progress":0.5,"add_to_ceph_s":true}}}
Dec  8 04:47:03 np0005550137 systemd[1]: libpod-5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345.scope: Deactivated successfully.
Dec  8 04:47:03 np0005550137 podman[85255]: 2025-12-08 09:47:03.956466974 +0000 UTC m=+6.613017589 container died 5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345 (image=quay.io/ceph/ceph:v19, name=festive_mirzakhani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:03 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e7e74484d6235b7f7f412620c2c8445b7e3005282011499c661806d198f4c6e5-merged.mount: Deactivated successfully.
Dec  8 04:47:04 np0005550137 podman[85255]: 2025-12-08 09:47:04.021326391 +0000 UTC m=+6.677876996 container remove 5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345 (image=quay.io/ceph/ceph:v19, name=festive_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Dec  8 04:47:04 np0005550137 systemd[1]: libpod-conmon-5ebde502fd9f3538a59de2b568a30fe96df8b23cbbc8ddfd1df7116240f88345.scope: Deactivated successfully.
Dec  8 04:47:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:04 np0005550137 ceph-mgr[74806]: mgr.server handle_report got status from non-daemon mon.compute-2
Dec  8 04:47:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:04.352+0000 7fa166de2640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Dec  8 04:47:04 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:47:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:04 np0005550137 python3[85333]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:04 np0005550137 podman[85334]: 2025-12-08 09:47:04.553995171 +0000 UTC m=+0.060532739 container create cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a (image=quay.io/ceph/ceph:v19, name=nifty_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  8 04:47:04 np0005550137 systemd[1]: Started libpod-conmon-cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a.scope.
Dec  8 04:47:04 np0005550137 podman[85334]: 2025-12-08 09:47:04.524193195 +0000 UTC m=+0.030730853 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:04 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:04 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cccdbb2a2ea5b9ccb8891e108c3479ffc48d9e86ba8b13158061936094c1790f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:04 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cccdbb2a2ea5b9ccb8891e108c3479ffc48d9e86ba8b13158061936094c1790f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:04 np0005550137 podman[85334]: 2025-12-08 09:47:04.643766974 +0000 UTC m=+0.150304602 container init cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a (image=quay.io/ceph/ceph:v19, name=nifty_kare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  8 04:47:04 np0005550137 podman[85334]: 2025-12-08 09:47:04.650181717 +0000 UTC m=+0.156719325 container start cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a (image=quay.io/ceph/ceph:v19, name=nifty_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec  8 04:47:04 np0005550137 podman[85334]: 2025-12-08 09:47:04.654077905 +0000 UTC m=+0.160615523 container attach cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a (image=quay.io/ceph/ceph:v19, name=nifty_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:04 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:04 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:04 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  8 04:47:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:05 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:05 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:05 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:05 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:05 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:05 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  8 04:47:06 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:06 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:47:06 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:06 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:06 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:06 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  8 04:47:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:07 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:07 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  8 04:47:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:08 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:47:08 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:08 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Dec  8 04:47:08 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:08 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:08 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:08 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Dec  8 04:47:08 np0005550137 ceph-mon[74516]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Dec  8 04:47:08 np0005550137 ceph-mon[74516]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:47:08 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : monmap epoch 3
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : last_changed 2025-12-08T09:47:03.886776+0000
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : created 2025-12-08T09:44:55.163607+0000
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap 
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.kitiwu(active, since 111s)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.mmkaif", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.mmkaif", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.mmkaif", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:09 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.mmkaif on compute-1
Dec  8 04:47:09 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.mmkaif on compute-1
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0 calling monitor election
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-2 calling monitor election
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-1 calling monitor election
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: overall HEALTH_OK
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.mmkaif", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4242671449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:09 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/761782811; not ready for session (expect reconnect)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec  8 04:47:10 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.mmkaif", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec  8 04:47:10 np0005550137 ceph-mon[74516]: Deploying daemon mgr.compute-1.mmkaif on compute-1
Dec  8 04:47:10 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/4242671449' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4242671449' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Dec  8 04:47:10 np0005550137 nifty_kare[85349]: pool 'vms' created
Dec  8 04:47:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Dec  8 04:47:10 np0005550137 systemd[1]: libpod-cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a.scope: Deactivated successfully.
Dec  8 04:47:10 np0005550137 podman[85334]: 2025-12-08 09:47:10.140984458 +0000 UTC m=+5.647522026 container died cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a (image=quay.io/ceph/ceph:v19, name=nifty_kare, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec  8 04:47:10 np0005550137 systemd[1]: var-lib-containers-storage-overlay-cccdbb2a2ea5b9ccb8891e108c3479ffc48d9e86ba8b13158061936094c1790f-merged.mount: Deactivated successfully.
Dec  8 04:47:10 np0005550137 podman[85334]: 2025-12-08 09:47:10.174329119 +0000 UTC m=+5.680866687 container remove cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a (image=quay.io/ceph/ceph:v19, name=nifty_kare, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  8 04:47:10 np0005550137 systemd[1]: libpod-conmon-cd23aec71ac09e154c5bd1df6e99c5327e9a51c9351cb9158717200a37910a7a.scope: Deactivated successfully.
Dec  8 04:47:10 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v62: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:47:10 np0005550137 python3[85412]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:10 np0005550137 podman[85413]: 2025-12-08 09:47:10.561847292 +0000 UTC m=+0.053692863 container create 5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a (image=quay.io/ceph/ceph:v19, name=mystifying_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  8 04:47:10 np0005550137 systemd[1]: Started libpod-conmon-5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a.scope.
Dec  8 04:47:10 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e88ddbaa82b37ec8f3de4a4c1bec5c5ae0b2b8fcdffee2f43b389cc2cae75e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8e88ddbaa82b37ec8f3de4a4c1bec5c5ae0b2b8fcdffee2f43b389cc2cae75e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:10 np0005550137 podman[85413]: 2025-12-08 09:47:10.539027367 +0000 UTC m=+0.030872988 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:10 np0005550137 podman[85413]: 2025-12-08 09:47:10.644003647 +0000 UTC m=+0.135849268 container init 5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a (image=quay.io/ceph/ceph:v19, name=mystifying_meninsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  8 04:47:10 np0005550137 podman[85413]: 2025-12-08 09:47:10.649705568 +0000 UTC m=+0.141551139 container start 5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a (image=quay.io/ceph/ceph:v19, name=mystifying_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:10 np0005550137 podman[85413]: 2025-12-08 09:47:10.652994917 +0000 UTC m=+0.144840558 container attach 5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a (image=quay.io/ceph/ceph:v19, name=mystifying_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:10 np0005550137 ceph-mgr[74806]: mgr.server handle_report got status from non-daemon mon.compute-1
Dec  8 04:47:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:10.892+0000 7fa166de2640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4229412466' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4229412466' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Dec  8 04:47:11 np0005550137 mystifying_meninsky[85428]: pool 'volumes' created
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/4242671449' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/4229412466' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:11 np0005550137 systemd[1]: libpod-5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a.scope: Deactivated successfully.
Dec  8 04:47:11 np0005550137 podman[85413]: 2025-12-08 09:47:11.24639786 +0000 UTC m=+0.738243431 container died 5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a (image=quay.io/ceph/ceph:v19, name=mystifying_meninsky, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Dec  8 04:47:11 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 13 pg[3.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:11 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev d59cca98-32da-4ff4-bf23-f5d3a53dc18c (Updating mgr deployment (+2 -> 3))
Dec  8 04:47:11 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event d59cca98-32da-4ff4-bf23-f5d3a53dc18c (Updating mgr deployment (+2 -> 3)) in 9 seconds
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec  8 04:47:11 np0005550137 systemd[1]: var-lib-containers-storage-overlay-d8e88ddbaa82b37ec8f3de4a4c1bec5c5ae0b2b8fcdffee2f43b389cc2cae75e-merged.mount: Deactivated successfully.
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:11 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev f5385a1a-e3dd-45ea-8b23-25166cfb2670 (Updating crash deployment (+1 -> 3))
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:11 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Dec  8 04:47:11 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Dec  8 04:47:12 np0005550137 podman[85413]: 2025-12-08 09:47:12.121362394 +0000 UTC m=+1.613208005 container remove 5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a (image=quay.io/ceph/ceph:v19, name=mystifying_meninsky, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  8 04:47:12 np0005550137 systemd[1]: libpod-conmon-5447bcbbe85682e84aa4f2e9c01a09543352bbe0b758348bd28ccb4d74de4c0a.scope: Deactivated successfully.
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec  8 04:47:12 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v64: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/4229412466' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: Deploying daemon crash.compute-2 on compute-2
Dec  8 04:47:12 np0005550137 python3[85493]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Dec  8 04:47:12 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 14 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:12 np0005550137 podman[85494]: 2025-12-08 09:47:12.556184097 +0000 UTC m=+0.083681543 container create 9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74 (image=quay.io/ceph/ceph:v19, name=condescending_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  8 04:47:12 np0005550137 podman[85494]: 2025-12-08 09:47:12.50563302 +0000 UTC m=+0.033130476 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:12 np0005550137 systemd[1]: Started libpod-conmon-9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74.scope.
Dec  8 04:47:12 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:12 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8a718e7ec4b8b082e92e50aea1729982e5be7b1ebef622e0ca4e4810576769/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:12 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec8a718e7ec4b8b082e92e50aea1729982e5be7b1ebef622e0ca4e4810576769/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:12 np0005550137 podman[85494]: 2025-12-08 09:47:12.693575451 +0000 UTC m=+0.221072947 container init 9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74 (image=quay.io/ceph/ceph:v19, name=condescending_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:12 np0005550137 podman[85494]: 2025-12-08 09:47:12.703593303 +0000 UTC m=+0.231090739 container start 9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74 (image=quay.io/ceph/ceph:v19, name=condescending_lederberg, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:12 np0005550137 podman[85494]: 2025-12-08 09:47:12.707367125 +0000 UTC m=+0.234864601 container attach 9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74 (image=quay.io/ceph/ceph:v19, name=condescending_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:12 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 4 completed events
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:47:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:12 np0005550137 ceph-mgr[74806]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2699660867' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2699660867' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2699660867' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Dec  8 04:47:13 np0005550137 condescending_lederberg[85510]: pool 'backups' created
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Dec  8 04:47:13 np0005550137 systemd[1]: libpod-9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74.scope: Deactivated successfully.
Dec  8 04:47:13 np0005550137 podman[85494]: 2025-12-08 09:47:13.511149303 +0000 UTC m=+1.038646739 container died 9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74 (image=quay.io/ceph/ceph:v19, name=condescending_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:13 np0005550137 systemd[1]: var-lib-containers-storage-overlay-ec8a718e7ec4b8b082e92e50aea1729982e5be7b1ebef622e0ca4e4810576769-merged.mount: Deactivated successfully.
Dec  8 04:47:13 np0005550137 podman[85494]: 2025-12-08 09:47:13.554502575 +0000 UTC m=+1.082000071 container remove 9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74 (image=quay.io/ceph/ceph:v19, name=condescending_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:13 np0005550137 systemd[1]: libpod-conmon-9f4c248d9edb6c1c00acfaca7da0d4614dad1894341b0cb413ea25b6c8f74d74.scope: Deactivated successfully.
Dec  8 04:47:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 15 pg[4.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:13 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev f5385a1a-e3dd-45ea-8b23-25166cfb2670 (Updating crash deployment (+1 -> 3))
Dec  8 04:47:13 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event f5385a1a-e3dd-45ea-8b23-25166cfb2670 (Updating crash deployment (+1 -> 3)) in 2 seconds
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:13 np0005550137 python3[85574]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:13 np0005550137 podman[85623]: 2025-12-08 09:47:13.940666756 +0000 UTC m=+0.047666482 container create 4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8 (image=quay.io/ceph/ceph:v19, name=gracious_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Dec  8 04:47:13 np0005550137 systemd[1]: Started libpod-conmon-4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8.scope.
Dec  8 04:47:14 np0005550137 podman[85623]: 2025-12-08 09:47:13.921204432 +0000 UTC m=+0.028204188 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:14 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc71bdf2d3a609910572034baae34e051c7816ffed26b2f7056b8ce9309261b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc71bdf2d3a609910572034baae34e051c7816ffed26b2f7056b8ce9309261b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:14 np0005550137 podman[85623]: 2025-12-08 09:47:14.042498532 +0000 UTC m=+0.149498248 container init 4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8 (image=quay.io/ceph/ceph:v19, name=gracious_zhukovsky, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  8 04:47:14 np0005550137 podman[85623]: 2025-12-08 09:47:14.048536954 +0000 UTC m=+0.155536710 container start 4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8 (image=quay.io/ceph/ceph:v19, name=gracious_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  8 04:47:14 np0005550137 podman[85623]: 2025-12-08 09:47:14.052514144 +0000 UTC m=+0.159513860 container attach 4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8 (image=quay.io/ceph/ceph:v19, name=gracious_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:14 np0005550137 podman[85702]: 2025-12-08 09:47:14.237547338 +0000 UTC m=+0.042913520 container create c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  8 04:47:14 np0005550137 systemd[1]: Started libpod-conmon-c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659.scope.
Dec  8 04:47:14 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:14 np0005550137 podman[85702]: 2025-12-08 09:47:14.303105715 +0000 UTC m=+0.108471957 container init c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  8 04:47:14 np0005550137 podman[85702]: 2025-12-08 09:47:14.310323743 +0000 UTC m=+0.115689905 container start c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:14 np0005550137 podman[85702]: 2025-12-08 09:47:14.217697322 +0000 UTC m=+0.023063514 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:14 np0005550137 podman[85702]: 2025-12-08 09:47:14.313950221 +0000 UTC m=+0.119316383 container attach c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_visvesvaraya, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:14 np0005550137 goofy_visvesvaraya[85718]: 167 167
Dec  8 04:47:14 np0005550137 systemd[1]: libpod-c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659.scope: Deactivated successfully.
Dec  8 04:47:14 np0005550137 podman[85702]: 2025-12-08 09:47:14.315833078 +0000 UTC m=+0.121199280 container died c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_visvesvaraya, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  8 04:47:14 np0005550137 systemd[1]: var-lib-containers-storage-overlay-f7bcac116c93797dfa98b537017ec9819002042df4b45722f8ee7b48618ad603-merged.mount: Deactivated successfully.
Dec  8 04:47:14 np0005550137 podman[85702]: 2025-12-08 09:47:14.361134977 +0000 UTC m=+0.166501179 container remove c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  8 04:47:14 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v67: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:14 np0005550137 systemd[1]: libpod-conmon-c21fe3a0c081120723e3103bc9608bc761e2b50758ce52c051d1a1d1d0ef2659.scope: Deactivated successfully.
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2698027637' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2699660867' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2698027637' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2698027637' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Dec  8 04:47:14 np0005550137 gracious_zhukovsky[85641]: pool 'images' created
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Dec  8 04:47:14 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 16 pg[5.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:14 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 16 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:14 np0005550137 systemd[1]: libpod-4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8.scope: Deactivated successfully.
Dec  8 04:47:14 np0005550137 podman[85623]: 2025-12-08 09:47:14.55676894 +0000 UTC m=+0.663768666 container died 4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8 (image=quay.io/ceph/ceph:v19, name=gracious_zhukovsky, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  8 04:47:14 np0005550137 systemd[1]: var-lib-containers-storage-overlay-cc71bdf2d3a609910572034baae34e051c7816ffed26b2f7056b8ce9309261b5-merged.mount: Deactivated successfully.
Dec  8 04:47:14 np0005550137 podman[85623]: 2025-12-08 09:47:14.602040478 +0000 UTC m=+0.709040234 container remove 4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8 (image=quay.io/ceph/ceph:v19, name=gracious_zhukovsky, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  8 04:47:14 np0005550137 podman[85745]: 2025-12-08 09:47:14.606760931 +0000 UTC m=+0.070294971 container create 79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wiles, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  8 04:47:14 np0005550137 systemd[1]: libpod-conmon-4aff6607e4a3c776675e5c37c204bdecb8dbded8a0dd5357728ca24a4a6a57b8.scope: Deactivated successfully.
Dec  8 04:47:14 np0005550137 systemd[1]: Started libpod-conmon-79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0.scope.
Dec  8 04:47:14 np0005550137 podman[85745]: 2025-12-08 09:47:14.576360248 +0000 UTC m=+0.039894328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:14 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a9e16540b7b9ca4b2c3546e45562dd249472149d6966e211d4e1c1473fc15c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a9e16540b7b9ca4b2c3546e45562dd249472149d6966e211d4e1c1473fc15c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a9e16540b7b9ca4b2c3546e45562dd249472149d6966e211d4e1c1473fc15c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a9e16540b7b9ca4b2c3546e45562dd249472149d6966e211d4e1c1473fc15c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4a9e16540b7b9ca4b2c3546e45562dd249472149d6966e211d4e1c1473fc15c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:14 np0005550137 podman[85745]: 2025-12-08 09:47:14.694683349 +0000 UTC m=+0.158217409 container init 79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wiles, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:14 np0005550137 podman[85745]: 2025-12-08 09:47:14.707442643 +0000 UTC m=+0.170976673 container start 79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wiles, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Dec  8 04:47:14 np0005550137 podman[85745]: 2025-12-08 09:47:14.710723312 +0000 UTC m=+0.174257342 container attach 79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wiles, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Dec  8 04:47:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e16 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:14 np0005550137 python3[85802]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:15 np0005550137 podman[85809]: 2025-12-08 09:47:15.011249782 +0000 UTC m=+0.051917089 container create 89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242 (image=quay.io/ceph/ceph:v19, name=mystifying_aryabhata, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Dec  8 04:47:15 np0005550137 boring_wiles[85772]: --> passed data devices: 0 physical, 1 LVM
Dec  8 04:47:15 np0005550137 boring_wiles[85772]: --> All data devices are unavailable
Dec  8 04:47:15 np0005550137 systemd[1]: Started libpod-conmon-89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242.scope.
Dec  8 04:47:15 np0005550137 podman[85809]: 2025-12-08 09:47:14.988001315 +0000 UTC m=+0.028668652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:15 np0005550137 podman[85745]: 2025-12-08 09:47:15.082105129 +0000 UTC m=+0.545639169 container died 79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:15 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:15 np0005550137 systemd[1]: libpod-79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0.scope: Deactivated successfully.
Dec  8 04:47:15 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ef1eb4f4bdeb98cdf73c1fad064322de71e1f4391f226cf6cc3dbe653711ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:15 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4ef1eb4f4bdeb98cdf73c1fad064322de71e1f4391f226cf6cc3dbe653711ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:15 np0005550137 podman[85809]: 2025-12-08 09:47:15.119455991 +0000 UTC m=+0.160123288 container init 89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242 (image=quay.io/ceph/ceph:v19, name=mystifying_aryabhata, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  8 04:47:15 np0005550137 podman[85809]: 2025-12-08 09:47:15.125612045 +0000 UTC m=+0.166279342 container start 89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242 (image=quay.io/ceph/ceph:v19, name=mystifying_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:47:15 np0005550137 podman[85809]: 2025-12-08 09:47:15.128348237 +0000 UTC m=+0.169015534 container attach 89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242 (image=quay.io/ceph/ceph:v19, name=mystifying_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:15 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e4a9e16540b7b9ca4b2c3546e45562dd249472149d6966e211d4e1c1473fc15c-merged.mount: Deactivated successfully.
Dec  8 04:47:15 np0005550137 podman[85745]: 2025-12-08 09:47:15.155495343 +0000 UTC m=+0.619029403 container remove 79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_wiles, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  8 04:47:15 np0005550137 systemd[1]: libpod-conmon-79be4f69697c0c0479af95147e95dd9b22971b95d83db079f385cd96b46966c0.scope: Deactivated successfully.
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/64121159' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2698027637' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/64121159' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/64121159' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Dec  8 04:47:15 np0005550137 mystifying_aryabhata[85829]: pool 'cephfs.cephfs.meta' created
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Dec  8 04:47:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 17 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:15 np0005550137 systemd[1]: libpod-89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242.scope: Deactivated successfully.
Dec  8 04:47:15 np0005550137 podman[85809]: 2025-12-08 09:47:15.571750947 +0000 UTC m=+0.612418334 container died 89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242 (image=quay.io/ceph/ceph:v19, name=mystifying_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  8 04:47:15 np0005550137 systemd[1]: var-lib-containers-storage-overlay-b4ef1eb4f4bdeb98cdf73c1fad064322de71e1f4391f226cf6cc3dbe653711ac-merged.mount: Deactivated successfully.
Dec  8 04:47:15 np0005550137 podman[85809]: 2025-12-08 09:47:15.623787659 +0000 UTC m=+0.664454976 container remove 89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242 (image=quay.io/ceph/ceph:v19, name=mystifying_aryabhata, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:15 np0005550137 systemd[1]: libpod-conmon-89431030958ee9da083d6b0cbb29f034feee83b1c02a79e3f219e98a9fc90242.scope: Deactivated successfully.
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"} v 0)
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"}]: dispatch
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"}]': finished
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:15 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:15 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:15 np0005550137 podman[85997]: 2025-12-08 09:47:15.833169905 +0000 UTC m=+0.053215419 container create f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Dec  8 04:47:15 np0005550137 systemd[1]: Started libpod-conmon-f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c.scope.
Dec  8 04:47:15 np0005550137 podman[85997]: 2025-12-08 09:47:15.81301711 +0000 UTC m=+0.033062654 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:15 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:15 np0005550137 python3[85999]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:15 np0005550137 podman[85997]: 2025-12-08 09:47:15.924285049 +0000 UTC m=+0.144330634 container init f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:15 np0005550137 podman[85997]: 2025-12-08 09:47:15.935127475 +0000 UTC m=+0.155172999 container start f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:15 np0005550137 podman[85997]: 2025-12-08 09:47:15.939566819 +0000 UTC m=+0.159612423 container attach f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Dec  8 04:47:15 np0005550137 cool_driscoll[86014]: 167 167
Dec  8 04:47:15 np0005550137 systemd[1]: libpod-f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c.scope: Deactivated successfully.
Dec  8 04:47:15 np0005550137 podman[85997]: 2025-12-08 09:47:15.944022372 +0000 UTC m=+0.164067906 container died f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:15 np0005550137 systemd[1]: var-lib-containers-storage-overlay-19dce325e2e3656492a2cc311d90c9f3bdf8a24a7088770b2465a0a2bcb6ea2c-merged.mount: Deactivated successfully.
Dec  8 04:47:15 np0005550137 podman[85997]: 2025-12-08 09:47:15.979091015 +0000 UTC m=+0.199136529 container remove f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_driscoll, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  8 04:47:15 np0005550137 podman[86017]: 2025-12-08 09:47:15.989486537 +0000 UTC m=+0.060707864 container create 9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f (image=quay.io/ceph/ceph:v19, name=crazy_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  8 04:47:16 np0005550137 systemd[1]: libpod-conmon-f936aff4ba864bb58e35eddb81e71563737383bd24337b1a5ccca26d7316c91c.scope: Deactivated successfully.
Dec  8 04:47:16 np0005550137 systemd[1]: Started libpod-conmon-9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f.scope.
Dec  8 04:47:16 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb3a975a8b0c209171906c5b9d8e7c7c3e69beda6229bc4b35cd7f6830b62e1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3eb3a975a8b0c209171906c5b9d8e7c7c3e69beda6229bc4b35cd7f6830b62e1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:16 np0005550137 podman[86017]: 2025-12-08 09:47:15.961027313 +0000 UTC m=+0.032248670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:16 np0005550137 podman[86017]: 2025-12-08 09:47:16.066476528 +0000 UTC m=+0.137697845 container init 9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f (image=quay.io/ceph/ceph:v19, name=crazy_maxwell, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  8 04:47:16 np0005550137 podman[86017]: 2025-12-08 09:47:16.071452728 +0000 UTC m=+0.142674055 container start 9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f (image=quay.io/ceph/ceph:v19, name=crazy_maxwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  8 04:47:16 np0005550137 podman[86017]: 2025-12-08 09:47:16.074468567 +0000 UTC m=+0.145689894 container attach 9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f (image=quay.io/ceph/ceph:v19, name=crazy_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  8 04:47:16 np0005550137 podman[86058]: 2025-12-08 09:47:16.17648198 +0000 UTC m=+0.047202748 container create 9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_matsumoto, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  8 04:47:16 np0005550137 systemd[1]: Started libpod-conmon-9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6.scope.
Dec  8 04:47:16 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db22d6cf05340f2baea8f9a704fe0b02a1c44cdaa71bdf747fd364b3f0d9e18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db22d6cf05340f2baea8f9a704fe0b02a1c44cdaa71bdf747fd364b3f0d9e18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db22d6cf05340f2baea8f9a704fe0b02a1c44cdaa71bdf747fd364b3f0d9e18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db22d6cf05340f2baea8f9a704fe0b02a1c44cdaa71bdf747fd364b3f0d9e18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:16 np0005550137 podman[86058]: 2025-12-08 09:47:16.15682532 +0000 UTC m=+0.027546098 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:16 np0005550137 podman[86058]: 2025-12-08 09:47:16.258868903 +0000 UTC m=+0.129589681 container init 9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  8 04:47:16 np0005550137 podman[86058]: 2025-12-08 09:47:16.266050279 +0000 UTC m=+0.136771037 container start 9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_matsumoto, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  8 04:47:16 np0005550137 podman[86058]: 2025-12-08 09:47:16.269214814 +0000 UTC m=+0.139935582 container attach 9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv started
Dec  8 04:47:16 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mgr.compute-2.zqytsv 192.168.122.102:0/300672968; not ready for session (expect reconnect)
Dec  8 04:47:16 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v71: 6 pgs: 3 unknown, 3 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3965186026' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]: {
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:    "1": [
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:        {
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "devices": [
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "/dev/loop3"
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            ],
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "lv_name": "ceph_lv0",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "lv_size": "21470642176",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ceb838ef-9d5d-54e4-bddb-2f01adce2ad4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=10863df8-16d4-4896-ae26-227efb76290e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "lv_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "name": "ceph_lv0",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "tags": {
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.block_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.cephx_lockbox_secret": "",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.cluster_fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.cluster_name": "ceph",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.crush_device_class": "",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.encrypted": "0",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.osd_fsid": "10863df8-16d4-4896-ae26-227efb76290e",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.osd_id": "1",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.type": "block",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.vdo": "0",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:                "ceph.with_tpm": "0"
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            },
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "type": "block",
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:            "vg_name": "ceph_vg0"
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:        }
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]:    ]
Dec  8 04:47:16 np0005550137 confident_matsumoto[86093]: }
Dec  8 04:47:16 np0005550137 systemd[1]: libpod-9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6.scope: Deactivated successfully.
Dec  8 04:47:16 np0005550137 podman[86058]: 2025-12-08 09:47:16.5579095 +0000 UTC m=+0.428630258 container died 9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_matsumoto, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/64121159' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.102:0/2956902591' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"}]: dispatch
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"}]: dispatch
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ff8e95fa-0a48-4071-9e37-1bf4e30dac93"}]': finished
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3965186026' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Dec  8 04:47:16 np0005550137 podman[86058]: 2025-12-08 09:47:16.597806088 +0000 UTC m=+0.468526836 container remove 9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:16 np0005550137 systemd[1]: var-lib-containers-storage-overlay-3db22d6cf05340f2baea8f9a704fe0b02a1c44cdaa71bdf747fd364b3f0d9e18-merged.mount: Deactivated successfully.
Dec  8 04:47:16 np0005550137 systemd[1]: libpod-conmon-9515a058f127e41ba15f39c372570c5728c1535fce8606c6a2dc513e94edc7e6.scope: Deactivated successfully.
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3965186026' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Dec  8 04:47:16 np0005550137 crazy_maxwell[86049]: pool 'cephfs.cephfs.data' created
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:16 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:16 np0005550137 systemd[1]: libpod-9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f.scope: Deactivated successfully.
Dec  8 04:47:16 np0005550137 podman[86017]: 2025-12-08 09:47:16.782748058 +0000 UTC m=+0.853969385 container died 9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f (image=quay.io/ceph/ceph:v19, name=crazy_maxwell, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.kitiwu(active, since 119s), standbys: compute-2.zqytsv
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"} v 0)
Dec  8 04:47:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"}]: dispatch
Dec  8 04:47:16 np0005550137 systemd[1]: var-lib-containers-storage-overlay-3eb3a975a8b0c209171906c5b9d8e7c7c3e69beda6229bc4b35cd7f6830b62e1-merged.mount: Deactivated successfully.
Dec  8 04:47:16 np0005550137 podman[86017]: 2025-12-08 09:47:16.833596095 +0000 UTC m=+0.904817422 container remove 9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f (image=quay.io/ceph/ceph:v19, name=crazy_maxwell, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:16 np0005550137 systemd[1]: libpod-conmon-9f96dc902d1a6b235d6d98f0c483adb80e992640b0656dea7f0da5eb0f70d04f.scope: Deactivated successfully.
Dec  8 04:47:17 np0005550137 python3[86225]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:17 np0005550137 podman[86247]: 2025-12-08 09:47:17.174180009 +0000 UTC m=+0.049853058 container create b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:17 np0005550137 systemd[1]: Started libpod-conmon-b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87.scope.
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif started
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from mgr.compute-1.mmkaif 192.168.122.101:0/1981514679; not ready for session (expect reconnect)
Dec  8 04:47:17 np0005550137 podman[86261]: 2025-12-08 09:47:17.232801159 +0000 UTC m=+0.039480097 container create e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5 (image=quay.io/ceph/ceph:v19, name=zen_poitras, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:17 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:17 np0005550137 podman[86247]: 2025-12-08 09:47:17.152979752 +0000 UTC m=+0.028652861 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:17 np0005550137 podman[86247]: 2025-12-08 09:47:17.251941663 +0000 UTC m=+0.127614812 container init b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  8 04:47:17 np0005550137 podman[86247]: 2025-12-08 09:47:17.258371456 +0000 UTC m=+0.134044515 container start b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:17 np0005550137 podman[86247]: 2025-12-08 09:47:17.261927842 +0000 UTC m=+0.137600941 container attach b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  8 04:47:17 np0005550137 heuristic_khayyam[86276]: 167 167
Dec  8 04:47:17 np0005550137 podman[86247]: 2025-12-08 09:47:17.263734707 +0000 UTC m=+0.139407776 container died b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:17 np0005550137 systemd[1]: Started libpod-conmon-e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5.scope.
Dec  8 04:47:17 np0005550137 systemd[1]: libpod-b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87.scope: Deactivated successfully.
Dec  8 04:47:17 np0005550137 systemd[1]: var-lib-containers-storage-overlay-a204641a203d29ec492e011cfe36dab1eeb6ca08833bde35df4e706cd449fc95-merged.mount: Deactivated successfully.
Dec  8 04:47:17 np0005550137 podman[86247]: 2025-12-08 09:47:17.303722647 +0000 UTC m=+0.179395706 container remove b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_khayyam, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:17 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5665b24cd7583f13d251aebda9f232224d949043d78af7ca30a9fc6cbba7a7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5665b24cd7583f13d251aebda9f232224d949043d78af7ca30a9fc6cbba7a7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:17 np0005550137 systemd[1]: libpod-conmon-b5612a55635c661d79ed3e84b8a96b875d5eeca2f7fa3354fba87490402d4f87.scope: Deactivated successfully.
Dec  8 04:47:17 np0005550137 podman[86261]: 2025-12-08 09:47:17.217171769 +0000 UTC m=+0.023850727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:17 np0005550137 podman[86261]: 2025-12-08 09:47:17.337202412 +0000 UTC m=+0.143881350 container init e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5 (image=quay.io/ceph/ceph:v19, name=zen_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  8 04:47:17 np0005550137 podman[86261]: 2025-12-08 09:47:17.34411731 +0000 UTC m=+0.150796248 container start e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5 (image=quay.io/ceph/ceph:v19, name=zen_poitras, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:17 np0005550137 podman[86261]: 2025-12-08 09:47:17.347168321 +0000 UTC m=+0.153847279 container attach e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5 (image=quay.io/ceph/ceph:v19, name=zen_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:17 np0005550137 podman[86307]: 2025-12-08 09:47:17.482180344 +0000 UTC m=+0.056238859 container create a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  8 04:47:17 np0005550137 systemd[1]: Started libpod-conmon-a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99.scope.
Dec  8 04:47:17 np0005550137 podman[86307]: 2025-12-08 09:47:17.454067071 +0000 UTC m=+0.028125636 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:17 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032ffdbef548cc32c416070330aebc249b148d58e09ba47d071a2677a64695c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032ffdbef548cc32c416070330aebc249b148d58e09ba47d071a2677a64695c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032ffdbef548cc32c416070330aebc249b148d58e09ba47d071a2677a64695c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/032ffdbef548cc32c416070330aebc249b148d58e09ba47d071a2677a64695c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:17 np0005550137 podman[86307]: 2025-12-08 09:47:17.584870977 +0000 UTC m=+0.158929532 container init a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_einstein, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  8 04:47:17 np0005550137 podman[86307]: 2025-12-08 09:47:17.591824475 +0000 UTC m=+0.165882990 container start a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_einstein, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:17 np0005550137 podman[86307]: 2025-12-08 09:47:17.595489105 +0000 UTC m=+0.169547680 container attach a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_einstein, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] Optimize plan auto_2025-12-08_09:47:17
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [balancer INFO root] Some PGs (0.571429) are unknown; try again later
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] _maybe_adjust
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1188324566' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  8 04:47:17 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 5 completed events
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3965186026' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec  8 04:47:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1188324566' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Dec  8 04:47:18 np0005550137 zen_poitras[86285]: enabled application 'rbd' on pool 'vms'
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.kitiwu(active, since 2m), standbys: compute-2.zqytsv, compute-1.mmkaif
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"} v 0)
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"}]: dispatch
Dec  8 04:47:18 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:18 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev acc356f5-6ef5-41e1-a546-ae46d295f0bf (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:18 np0005550137 systemd[1]: libpod-e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5.scope: Deactivated successfully.
Dec  8 04:47:18 np0005550137 lvm[86420]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:47:18 np0005550137 lvm[86420]: VG ceph_vg0 finished
Dec  8 04:47:18 np0005550137 podman[86419]: 2025-12-08 09:47:18.324386206 +0000 UTC m=+0.080172899 container died e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5 (image=quay.io/ceph/ceph:v19, name=zen_poitras, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid)
Dec  8 04:47:18 np0005550137 systemd[1]: var-lib-containers-storage-overlay-a5665b24cd7583f13d251aebda9f232224d949043d78af7ca30a9fc6cbba7a7b-merged.mount: Deactivated successfully.
Dec  8 04:47:18 np0005550137 charming_einstein[86343]: {}
Dec  8 04:47:18 np0005550137 podman[86419]: 2025-12-08 09:47:18.369560322 +0000 UTC m=+0.125346985 container remove e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5 (image=quay.io/ceph/ceph:v19, name=zen_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:18 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:18 np0005550137 systemd[1]: libpod-conmon-e9f6092587c6834291c490431eb740d8b567118382c7523fd16810a1f95b03a5.scope: Deactivated successfully.
Dec  8 04:47:18 np0005550137 systemd[1]: libpod-a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99.scope: Deactivated successfully.
Dec  8 04:47:18 np0005550137 podman[86307]: 2025-12-08 09:47:18.385632424 +0000 UTC m=+0.959690949 container died a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  8 04:47:18 np0005550137 systemd[1]: libpod-a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99.scope: Consumed 1.110s CPU time.
Dec  8 04:47:18 np0005550137 systemd[1]: var-lib-containers-storage-overlay-032ffdbef548cc32c416070330aebc249b148d58e09ba47d071a2677a64695c6-merged.mount: Deactivated successfully.
Dec  8 04:47:18 np0005550137 podman[86307]: 2025-12-08 09:47:18.4371581 +0000 UTC m=+1.011216615 container remove a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  8 04:47:18 np0005550137 systemd[1]: libpod-conmon-a50f0081d772d7aef60a7d6d8eb34a432d5f8020e7c999c005e3ba3b64edbc99.scope: Deactivated successfully.
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:47:18 np0005550137 python3[86475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:18 np0005550137 podman[86476]: 2025-12-08 09:47:18.758173197 +0000 UTC m=+0.064182208 container create 736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c (image=quay.io/ceph/ceph:v19, name=vibrant_kirch, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:47:18 np0005550137 systemd[1]: Started libpod-conmon-736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c.scope.
Dec  8 04:47:18 np0005550137 podman[86476]: 2025-12-08 09:47:18.739937039 +0000 UTC m=+0.045946090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:18 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:18 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70a0e9bbc75afd470b7ab2b0174a79de6fec3cd413620492df4c8f192252aea0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:18 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70a0e9bbc75afd470b7ab2b0174a79de6fec3cd413620492df4c8f192252aea0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:18 np0005550137 podman[86476]: 2025-12-08 09:47:18.862481718 +0000 UTC m=+0.168490769 container init 736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c (image=quay.io/ceph/ceph:v19, name=vibrant_kirch, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:18 np0005550137 podman[86476]: 2025-12-08 09:47:18.871525149 +0000 UTC m=+0.177534160 container start 736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c (image=quay.io/ceph/ceph:v19, name=vibrant_kirch, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:18 np0005550137 podman[86476]: 2025-12-08 09:47:18.875592051 +0000 UTC m=+0.181601142 container attach 736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c (image=quay.io/ceph/ceph:v19, name=vibrant_kirch, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1188324566' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1188324566' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:19 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:19 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 7d3c549d-c5a5-4146-bffa-7616b44a86bb (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4067368477' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  8 04:47:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/4067368477' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4067368477' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Dec  8 04:47:20 np0005550137 vibrant_kirch[86492]: enabled application 'rbd' on pool 'volumes'
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:20 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:20 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev d079d8de-eaa4-47b7-a1be-f09a4f3e7919 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:20 np0005550137 systemd[1]: libpod-736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c.scope: Deactivated successfully.
Dec  8 04:47:20 np0005550137 podman[86476]: 2025-12-08 09:47:20.213183243 +0000 UTC m=+1.519192244 container died 736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c (image=quay.io/ceph/ceph:v19, name=vibrant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:20 np0005550137 systemd[1]: var-lib-containers-storage-overlay-70a0e9bbc75afd470b7ab2b0174a79de6fec3cd413620492df4c8f192252aea0-merged.mount: Deactivated successfully.
Dec  8 04:47:20 np0005550137 podman[86476]: 2025-12-08 09:47:20.254183503 +0000 UTC m=+1.560192534 container remove 736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c (image=quay.io/ceph/ceph:v19, name=vibrant_kirch, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  8 04:47:20 np0005550137 systemd[1]: libpod-conmon-736ad1e4e47b148cd6d53e61dad3c1c2e4877959a778a0a56ed59bdc3c67901c.scope: Deactivated successfully.
Dec  8 04:47:20 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v77: 38 pgs: 31 unknown, 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:47:20 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:20 np0005550137 python3[86556]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:20 np0005550137 podman[86557]: 2025-12-08 09:47:20.63084311 +0000 UTC m=+0.055009912 container create 1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95 (image=quay.io/ceph/ceph:v19, name=laughing_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:20 np0005550137 systemd[1]: Started libpod-conmon-1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95.scope.
Dec  8 04:47:20 np0005550137 podman[86557]: 2025-12-08 09:47:20.606689715 +0000 UTC m=+0.030856537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:20 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a68e89411627369b1a69b1e497c8fb2e01a5b8c3d014c130766ec6d32bda7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a68e89411627369b1a69b1e497c8fb2e01a5b8c3d014c130766ec6d32bda7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:20 np0005550137 podman[86557]: 2025-12-08 09:47:20.736279655 +0000 UTC m=+0.160446467 container init 1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95 (image=quay.io/ceph/ceph:v19, name=laughing_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:20 np0005550137 podman[86557]: 2025-12-08 09:47:20.742729739 +0000 UTC m=+0.166896501 container start 1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95 (image=quay.io/ceph/ceph:v19, name=laughing_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:20 np0005550137 podman[86557]: 2025-12-08 09:47:20.745568294 +0000 UTC m=+0.169735106 container attach 1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95 (image=quay.io/ceph/ceph:v19, name=laughing_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/962673436' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/962673436' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Dec  8 04:47:21 np0005550137 laughing_hugle[86572]: enabled application 'rbd' on pool 'backups'
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/4067368477' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/962673436' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:21 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:21 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 5f9783d0-912f-42da-9c83-ceeb85c88c28 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:21 np0005550137 systemd[1]: libpod-1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95.scope: Deactivated successfully.
Dec  8 04:47:21 np0005550137 podman[86557]: 2025-12-08 09:47:21.24247983 +0000 UTC m=+0.666646622 container died 1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95 (image=quay.io/ceph/ceph:v19, name=laughing_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Dec  8 04:47:21 np0005550137 systemd[1]: var-lib-containers-storage-overlay-45a68e89411627369b1a69b1e497c8fb2e01a5b8c3d014c130766ec6d32bda7e-merged.mount: Deactivated successfully.
Dec  8 04:47:21 np0005550137 podman[86557]: 2025-12-08 09:47:21.28278435 +0000 UTC m=+0.706951122 container remove 1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95 (image=quay.io/ceph/ceph:v19, name=laughing_hugle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:21 np0005550137 systemd[1]: libpod-conmon-1008cdb1107f986e875069802aee8996515d3558837bebb256956669ad545e95.scope: Deactivated successfully.
Dec  8 04:47:21 np0005550137 python3[86633]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:21 np0005550137 podman[86634]: 2025-12-08 09:47:21.665070614 +0000 UTC m=+0.050691071 container create 156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25 (image=quay.io/ceph/ceph:v19, name=charming_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  8 04:47:21 np0005550137 systemd[1]: Started libpod-conmon-156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25.scope.
Dec  8 04:47:21 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:21 np0005550137 podman[86634]: 2025-12-08 09:47:21.642626452 +0000 UTC m=+0.028246929 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327d6d43bfd58774015f0c7f22646cdfe10d69a857552a256c7f2175231d2b51/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/327d6d43bfd58774015f0c7f22646cdfe10d69a857552a256c7f2175231d2b51/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:21 np0005550137 podman[86634]: 2025-12-08 09:47:21.756952414 +0000 UTC m=+0.142572881 container init 156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25 (image=quay.io/ceph/ceph:v19, name=charming_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:21 np0005550137 podman[86634]: 2025-12-08 09:47:21.767262833 +0000 UTC m=+0.152883320 container start 156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25 (image=quay.io/ceph/ceph:v19, name=charming_lehmann, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:21 np0005550137 podman[86634]: 2025-12-08 09:47:21.771589272 +0000 UTC m=+0.157209749 container attach 156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25 (image=quay.io/ceph/ceph:v19, name=charming_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:21 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Dec  8 04:47:21 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23 pruub=14.375902176s) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active pruub 65.167823792s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 23 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23 pruub=8.437085152s) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active pruub 59.229038239s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 23 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23 pruub=14.375902176s) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown pruub 65.167823792s@ mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 23 pg[4.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23 pruub=8.437085152s) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown pruub 59.229038239s@ mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2842985013' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2842985013' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Dec  8 04:47:22 np0005550137 charming_lehmann[86650]: enabled application 'rbd' on pool 'images'
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:22 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:22 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev ad1399ff-dcc4-4c9b-b13d-2ab60fb06c89 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1f( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1e( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.19( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.17( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.18( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.10( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.16( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.15( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.12( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.11( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.14( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.13( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.14( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.13( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.12( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.10( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.16( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.15( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.17( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.f( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.8( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.e( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.d( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.9( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.a( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.c( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.b( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.b( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.11( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.a( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.d( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.c( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.7( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.7( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.6( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.5( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.2( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.2( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.6( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.3( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.4( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.5( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.3( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.4( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.8( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.f( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.9( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.e( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1a( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1d( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1b( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1c( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1b( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1c( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1a( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1d( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.19( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1e( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1f( empty local-lis/les=13/14 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.18( empty local-lis/les=15/16 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/962673436' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: Deploying daemon osd.2 on compute-2
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2842985013' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.17( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.12( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.18( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.10( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.14( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.11( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.12( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.17( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.16( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.19( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.7( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.0( empty local-lis/les=23/24 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.7( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.6( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.2( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.4( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.4( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=15/15 les/c/f=16/16/0 sis=23) [1] r=0 lpr=23 pi=[15,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 24 pg[3.1f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=13/13 les/c/f=14/14/0 sis=23) [1] r=0 lpr=23 pi=[13,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:22 np0005550137 systemd[1]: libpod-156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25.scope: Deactivated successfully.
Dec  8 04:47:22 np0005550137 podman[86634]: 2025-12-08 09:47:22.263292962 +0000 UTC m=+0.648913409 container died 156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25 (image=quay.io/ceph/ceph:v19, name=charming_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:22 np0005550137 systemd[1]: var-lib-containers-storage-overlay-327d6d43bfd58774015f0c7f22646cdfe10d69a857552a256c7f2175231d2b51-merged.mount: Deactivated successfully.
Dec  8 04:47:22 np0005550137 podman[86634]: 2025-12-08 09:47:22.306491589 +0000 UTC m=+0.692112036 container remove 156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25 (image=quay.io/ceph/ceph:v19, name=charming_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:22 np0005550137 systemd[1]: libpod-conmon-156ae2dc3b7fcb60a036245d8b5ce7768b37029b4dabecd212b39b67f066dd25.scope: Deactivated successfully.
Dec  8 04:47:22 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v80: 100 pgs: 93 unknown, 1 creating+peering, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:47:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:22 np0005550137 python3[86712]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec  8 04:47:22 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec  8 04:47:22 np0005550137 podman[86713]: 2025-12-08 09:47:22.678033082 +0000 UTC m=+0.061596480 container create 4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d (image=quay.io/ceph/ceph:v19, name=unruffled_cannon, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:22 np0005550137 systemd[75846]: Starting Mark boot as successful...
Dec  8 04:47:22 np0005550137 systemd[75846]: Finished Mark boot as successful.
Dec  8 04:47:22 np0005550137 systemd[1]: Started libpod-conmon-4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d.scope.
Dec  8 04:47:22 np0005550137 podman[86713]: 2025-12-08 09:47:22.647496966 +0000 UTC m=+0.031060404 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:22 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:22 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e30751e33eba26c36ef3c5d7d7956515123abeed9b7b2a992066ce8d0224aa8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:22 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e30751e33eba26c36ef3c5d7d7956515123abeed9b7b2a992066ce8d0224aa8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:22 np0005550137 podman[86713]: 2025-12-08 09:47:22.771887399 +0000 UTC m=+0.155450867 container init 4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d (image=quay.io/ceph/ceph:v19, name=unruffled_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:22 np0005550137 podman[86713]: 2025-12-08 09:47:22.782177247 +0000 UTC m=+0.165740615 container start 4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d (image=quay.io/ceph/ceph:v19, name=unruffled_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  8 04:47:22 np0005550137 podman[86713]: 2025-12-08 09:47:22.785677523 +0000 UTC m=+0.169240891 container attach 4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d (image=quay.io/ceph/ceph:v19, name=unruffled_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1163642721' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1163642721' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Dec  8 04:47:23 np0005550137 unruffled_cannon[86729]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 76ceca4d-4d32-44ce-b613-900a820af3db (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev acc356f5-6ef5-41e1-a546-ae46d295f0bf (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event acc356f5-6ef5-41e1-a546-ae46d295f0bf (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 7d3c549d-c5a5-4146-bffa-7616b44a86bb (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 7d3c549d-c5a5-4146-bffa-7616b44a86bb (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev d079d8de-eaa4-47b7-a1be-f09a4f3e7919 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event d079d8de-eaa4-47b7-a1be-f09a4f3e7919 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 5f9783d0-912f-42da-9c83-ceeb85c88c28 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 5f9783d0-912f-42da-9c83-ceeb85c88c28 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev ad1399ff-dcc4-4c9b-b13d-2ab60fb06c89 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event ad1399ff-dcc4-4c9b-b13d-2ab60fb06c89 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 76ceca4d-4d32-44ce-b613-900a820af3db (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec  8 04:47:23 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 76ceca4d-4d32-44ce-b613-900a820af3db (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Dec  8 04:47:23 np0005550137 systemd[1]: libpod-4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d.scope: Deactivated successfully.
Dec  8 04:47:23 np0005550137 podman[86713]: 2025-12-08 09:47:23.467597433 +0000 UTC m=+0.851160801 container died 4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d (image=quay.io/ceph/ceph:v19, name=unruffled_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Dec  8 04:47:23 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e30751e33eba26c36ef3c5d7d7956515123abeed9b7b2a992066ce8d0224aa8b-merged.mount: Deactivated successfully.
Dec  8 04:47:23 np0005550137 podman[86713]: 2025-12-08 09:47:23.505247103 +0000 UTC m=+0.888810461 container remove 4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d (image=quay.io/ceph/ceph:v19, name=unruffled_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  8 04:47:23 np0005550137 systemd[1]: libpod-conmon-4eda41e27ff2453ed3d1be9a047e0a860fd4f90ea41ea01f4c8fde8b6c6af19d.scope: Deactivated successfully.
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2842985013' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:23 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1163642721' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Dec  8 04:47:23 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec  8 04:47:23 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec  8 04:47:24 np0005550137 python3[86793]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:24 np0005550137 podman[86794]: 2025-12-08 09:47:24.21439833 +0000 UTC m=+0.045530878 container create 40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae (image=quay.io/ceph/ceph:v19, name=wizardly_colden, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:24 np0005550137 systemd[1]: Started libpod-conmon-40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae.scope.
Dec  8 04:47:24 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00e25208d8ea904520b2b83ccc107979469f9c4f4478d872482eec14bd51c31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e00e25208d8ea904520b2b83ccc107979469f9c4f4478d872482eec14bd51c31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:24 np0005550137 podman[86794]: 2025-12-08 09:47:24.190339787 +0000 UTC m=+0.021472355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:24 np0005550137 podman[86794]: 2025-12-08 09:47:24.302515005 +0000 UTC m=+0.133647563 container init 40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae (image=quay.io/ceph/ceph:v19, name=wizardly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  8 04:47:24 np0005550137 podman[86794]: 2025-12-08 09:47:24.308790833 +0000 UTC m=+0.139923361 container start 40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae (image=quay.io/ceph/ceph:v19, name=wizardly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  8 04:47:24 np0005550137 podman[86794]: 2025-12-08 09:47:24.311886706 +0000 UTC m=+0.143019254 container attach 40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae (image=quay.io/ceph/ceph:v19, name=wizardly_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Dec  8 04:47:24 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v82: 162 pgs: 62 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:24 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1163642721' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2436376386' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  8 04:47:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 25 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=14.924284935s) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active pruub 68.452720642s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 25 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25 pruub=14.724353790s) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active pruub 68.252807617s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25 pruub=14.924284935s) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown pruub 68.452720642s@ mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25 pruub=14.724353790s) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown pruub 68.252807617s@ mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.7( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.8( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.9( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.5( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.6( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.3( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.4( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.1( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.2( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.11( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.12( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.13( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.14( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.10( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.15( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.16( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.17( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.18( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.19( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.1a( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.1b( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.1c( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.1d( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.1e( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.3( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[6.1f( empty local-lis/les=17/18 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.d( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.c( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.6( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.7( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.8( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.9( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.1( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.b( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.2( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.4( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.5( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.12( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.13( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.e( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.f( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.a( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.10( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.11( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.14( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.15( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.16( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.17( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.18( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.19( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.1a( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.1b( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.1c( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.1d( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.1e( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 26 pg[5.1f( empty local-lis/les=16/17 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2436376386' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Dec  8 04:47:25 np0005550137 wizardly_colden[86809]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:25 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2436376386' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Dec  8 04:47:25 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2436376386' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.19( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.1a( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.18( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 systemd[1]: libpod-40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae.scope: Deactivated successfully.
Dec  8 04:47:25 np0005550137 podman[86794]: 2025-12-08 09:47:25.60756962 +0000 UTC m=+1.438702188 container died 40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae (image=quay.io/ceph/ceph:v19, name=wizardly_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.18( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.1b( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.1b( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.19( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.1a( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.1d( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.1f( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.1e( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.f( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.e( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.c( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.d( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.1( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.2( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.5( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.7( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.6( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.7( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.4( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.1c( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.4( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.3( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.0( empty local-lis/les=25/27 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.0( empty local-lis/les=25/27 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.3( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.2( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.1( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.6( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.f( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.c( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.e( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.5( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.d( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.a( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.9( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.b( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.8( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.8( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.b( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.a( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.9( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.15( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.16( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.14( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.17( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.16( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.15( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.14( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.12( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.11( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.13( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.10( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.10( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.13( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.12( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.17( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.11( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.1e( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.1d( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[5.1f( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=16/16 les/c/f=17/17/0 sis=25) [1] r=0 lpr=25 pi=[16,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 27 pg[6.1c( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=17/17 les/c/f=18/18/0 sis=25) [1] r=0 lpr=25 pi=[17,25)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Dec  8 04:47:25 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Dec  8 04:47:25 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e00e25208d8ea904520b2b83ccc107979469f9c4f4478d872482eec14bd51c31-merged.mount: Deactivated successfully.
Dec  8 04:47:25 np0005550137 podman[86794]: 2025-12-08 09:47:25.667560041 +0000 UTC m=+1.498692559 container remove 40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae (image=quay.io/ceph/ceph:v19, name=wizardly_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:25 np0005550137 systemd[1]: libpod-conmon-40bbb101db08f981fb2c375bba65a421c5ccf71d90507755d00846f3dc3e97ae.scope: Deactivated successfully.
Dec  8 04:47:26 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v85: 193 pgs: 93 unknown, 100 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:26 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec  8 04:47:26 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec  8 04:47:26 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  8 04:47:26 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  8 04:47:26 np0005550137 python3[86923]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:47:27 np0005550137 python3[86994]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187246.4125135-37164-144899805799955/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:47:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:27 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec  8 04:47:27 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec  8 04:47:27 np0005550137 ceph-mon[74516]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  8 04:47:27 np0005550137 ceph-mon[74516]: Cluster is now healthy
Dec  8 04:47:27 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:27 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:27 np0005550137 python3[87096]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:47:28 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 11 completed events
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:28 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v86: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:28 np0005550137 python3[87171]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187247.5835154-37178-47094358924063/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a57d1c052bab1e43fe28d5161d77df0cec14e892 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:47:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:28 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec  8 04:47:28 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec  8 04:47:28 np0005550137 python3[87223]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:28 np0005550137 podman[87224]: 2025-12-08 09:47:28.855557267 +0000 UTC m=+0.047614980 container create 68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd (image=quay.io/ceph/ceph:v19, name=modest_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  8 04:47:28 np0005550137 systemd[1]: Started libpod-conmon-68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd.scope.
Dec  8 04:47:28 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e369d20e13082c0be4a6f4d878a04cdee922beb69d14a118491c0f1138b1c845/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e369d20e13082c0be4a6f4d878a04cdee922beb69d14a118491c0f1138b1c845/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:28 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e369d20e13082c0be4a6f4d878a04cdee922beb69d14a118491c0f1138b1c845/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:28 np0005550137 podman[87224]: 2025-12-08 09:47:28.835526616 +0000 UTC m=+0.027584349 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:28 np0005550137 podman[87224]: 2025-12-08 09:47:28.932084735 +0000 UTC m=+0.124142448 container init 68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd (image=quay.io/ceph/ceph:v19, name=modest_moore, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:28 np0005550137 podman[87224]: 2025-12-08 09:47:28.938208209 +0000 UTC m=+0.130265922 container start 68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd (image=quay.io/ceph/ceph:v19, name=modest_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:28 np0005550137 podman[87224]: 2025-12-08 09:47:28.941211988 +0000 UTC m=+0.133269721 container attach 68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd (image=quay.io/ceph/ceph:v19, name=modest_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:29 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095930099s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944297791s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095897675s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944297791s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.18( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.444371223s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.292884827s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.18( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.444317818s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.292884827s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1b( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453758240s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.302398682s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095649719s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944290161s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095592499s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944244385s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1b( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453745842s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.302398682s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.1d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095608711s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944290161s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095556259s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944244385s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095380783s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944168091s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095366478s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944152832s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095368385s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944168091s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.1c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095347404s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944152832s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.19( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453549385s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.302604675s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.19( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453535080s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.302604675s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1a( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453555107s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.302642822s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.095001221s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944122314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1a( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453516006s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.302642822s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1c( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.454104424s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303298950s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1c( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.454092979s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303298950s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.1a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.094978333s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944122314s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.094901085s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944206238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.1e( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453479767s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.302742004s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.094778061s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944091797s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.1e( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453375816s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.302742004s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.9( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.094712257s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944091797s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.f( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453302383s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.302795410s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.094835281s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944206238s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.f( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453279495s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.302795410s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.e( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453356743s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303016663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.d( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453385353s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303070068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.e( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453343391s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303016663s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.d( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453368187s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303070068s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.1a( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.442990303s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.292778015s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.2( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453294754s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303115845s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.093957901s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.943862915s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.2( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453258514s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303115845s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.3( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.093942642s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.943862915s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.094060898s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.944023132s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.094041824s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.944023132s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.4( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453373909s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303375244s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.7( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453104973s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303176880s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.4( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453349113s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303375244s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.7( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453089714s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303176880s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.1a( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.442966461s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.292778015s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.093471527s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.943679810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.5( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.093449593s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.943679810s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.7( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.452974319s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303237915s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.3( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453395844s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303672791s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.093503952s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.943786621s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.3( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.453381538s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303672791s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.093452454s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.943786621s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.7( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.452952385s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303237915s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.451296806s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303894043s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.451284409s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303894043s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.2( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.451136589s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.303794861s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.2( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.451115608s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303794861s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.5( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.451305389s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304046631s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.5( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.451295853s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304046631s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.087344170s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.940155029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.087348938s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.940185547s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.a( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.087338448s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.940185547s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.087315559s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.940155029s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.090743065s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.943710327s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.090724945s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.943710327s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.e( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450981140s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304031372s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.087024689s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.940109253s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.e( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450959206s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304031372s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.c( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.087009430s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.940109253s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086843491s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939971924s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086826324s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939971924s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086750031s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939933777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.d( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086730003s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939933777s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.8( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450903893s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304183960s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086688995s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939994812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086600304s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939918518s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.8( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450889587s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304183960s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086674690s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939994812s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086433411s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939819336s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.e( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086577415s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939918518s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.9( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450938225s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304367065s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.9( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450926781s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304367065s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.a( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450802803s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304283142s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.a( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450774193s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304283142s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086028099s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939613342s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.10( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086015701s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939613342s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086297035s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939910889s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.f( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086416245s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939819336s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086144447s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939910889s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.16( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450785637s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304420471s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.15( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450529099s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304382324s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086366653s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.940246582s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.15( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450495720s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304382324s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.11( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.086320877s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.940246582s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.16( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450572968s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304420471s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.085761070s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939796448s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.17( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450551987s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304595947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.085746765s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939796448s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.17( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450534821s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304595947s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.085172653s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939346313s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.13( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.085156441s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939346313s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.085120201s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939323425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.084975243s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939323425s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.084902763s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.939315796s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.14( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.084888458s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.939315796s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.15( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450426102s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304672241s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.15( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450178146s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304672241s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.10( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450178146s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304794312s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.10( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450166702s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304794312s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.084124565s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.938804626s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.16( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.084104538s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.938804626s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.11( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450146675s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304862976s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.12( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450099945s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304832458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.084167480s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.938911438s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.12( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450084686s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304832458s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.083700180s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 active pruub 66.938552856s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[3.15( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.084089279s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.938911438s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.11( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450122833s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304862976s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=28 pruub=9.083683968s) [0] r=-1 lpr=28 pi=[23,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 66.938552856s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.1c( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.450002670s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304954529s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[6.1c( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.449988365s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304954529s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1f( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.449929237s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 active pruub 70.304931641s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[5.1f( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=28 pruub=12.449903488s) [0] r=-1 lpr=28 pi=[25,28)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304931641s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.15( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.10( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.19( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.1d( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.13( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.13( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.14( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.e( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.10( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.b( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.c( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.8( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.a( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.e( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.6( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.9( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.4( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.4( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.3( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.6( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.9( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.2( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.1e( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.f( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.a( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.1b( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.d( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.1b( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.1f( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.1e( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[7.18( empty local-lis/les=0/0 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 28 pg[2.1( empty local-lis/les=0/0 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2897888694' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2897888694' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  8 04:47:29 np0005550137 modest_moore[87239]: 
Dec  8 04:47:29 np0005550137 modest_moore[87239]: [global]
Dec  8 04:47:29 np0005550137 modest_moore[87239]: #011fsid = ceb838ef-9d5d-54e4-bddb-2f01adce2ad4
Dec  8 04:47:29 np0005550137 modest_moore[87239]: #011mon_host = 192.168.122.100
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:29 np0005550137 systemd[1]: libpod-68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd.scope: Deactivated successfully.
Dec  8 04:47:29 np0005550137 podman[87224]: 2025-12-08 09:47:29.363976678 +0000 UTC m=+0.556034391 container died 68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd (image=quay.io/ceph/ceph:v19, name=modest_moore, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  8 04:47:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e369d20e13082c0be4a6f4d878a04cdee922beb69d14a118491c0f1138b1c845-merged.mount: Deactivated successfully.
Dec  8 04:47:29 np0005550137 podman[87224]: 2025-12-08 09:47:29.412172245 +0000 UTC m=+0.604229958 container remove 68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd (image=quay.io/ceph/ceph:v19, name=modest_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Dec  8 04:47:29 np0005550137 systemd[1]: libpod-conmon-68433cc26b2d8ab680d228112a107f1d6bc13f904055dc4b3707c60007a4a1dd.scope: Deactivated successfully.
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec  8 04:47:29 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  8 04:47:29 np0005550137 python3[87325]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:29 np0005550137 podman[87326]: 2025-12-08 09:47:29.826933726 +0000 UTC m=+0.062440516 container create b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4 (image=quay.io/ceph/ceph:v19, name=eager_tharp, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:29 np0005550137 systemd[1]: Started libpod-conmon-b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4.scope.
Dec  8 04:47:29 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:29 np0005550137 podman[87326]: 2025-12-08 09:47:29.805719629 +0000 UTC m=+0.041226429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5075f0f0fdee919a8be094a131865008bf34aa99951564cb85bb1527f7c207/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5075f0f0fdee919a8be094a131865008bf34aa99951564cb85bb1527f7c207/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5075f0f0fdee919a8be094a131865008bf34aa99951564cb85bb1527f7c207/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:29 np0005550137 podman[87326]: 2025-12-08 09:47:29.919087072 +0000 UTC m=+0.154593862 container init b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4 (image=quay.io/ceph/ceph:v19, name=eager_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:29 np0005550137 podman[87326]: 2025-12-08 09:47:29.928544766 +0000 UTC m=+0.164051546 container start b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4 (image=quay.io/ceph/ceph:v19, name=eager_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  8 04:47:29 np0005550137 podman[87326]: 2025-12-08 09:47:29.932615058 +0000 UTC m=+0.168121868 container attach b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4 (image=quay.io/ceph/ceph:v19, name=eager_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2897888694' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2897888694' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='osd.2 [v2:192.168.122.102:6800/2213880029,v1:192.168.122.102:6801/2213880029]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:30 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.1e( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e29 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.18( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.1e( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.1f( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.9( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.6( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.1b( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.6( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.2( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.3( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.1( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.4( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.e( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.4( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.8( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.b( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.a( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.e( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.f( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.14( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[2.19( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=21/21 les/c/f=22/22/0 sis=28) [1] r=0 lpr=28 pi=[21,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.10( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.1d( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.13( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 29 pg[7.9( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=26/26 les/c/f=27/27/0 sis=28) [1] r=0 lpr=28 pi=[26,28)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:30 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v89: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1181990922' entity='client.admin' 
Dec  8 04:47:30 np0005550137 eager_tharp[87341]: set ssl_option
Dec  8 04:47:30 np0005550137 systemd[1]: libpod-b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4.scope: Deactivated successfully.
Dec  8 04:47:30 np0005550137 podman[87326]: 2025-12-08 09:47:30.43846315 +0000 UTC m=+0.673969920 container died b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4 (image=quay.io/ceph/ceph:v19, name=eager_tharp, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:30 np0005550137 systemd[1]: var-lib-containers-storage-overlay-dc5075f0f0fdee919a8be094a131865008bf34aa99951564cb85bb1527f7c207-merged.mount: Deactivated successfully.
Dec  8 04:47:30 np0005550137 podman[87326]: 2025-12-08 09:47:30.485132781 +0000 UTC m=+0.720639531 container remove b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4 (image=quay.io/ceph/ceph:v19, name=eager_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec  8 04:47:30 np0005550137 systemd[1]: libpod-conmon-b3e8b98db17afba94558e36e9be971566fceab5f9e2ae4f1174e11e5bff0ceb4.scope: Deactivated successfully.
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec  8 04:47:30 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec  8 04:47:30 np0005550137 python3[87405]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:30 np0005550137 podman[87406]: 2025-12-08 09:47:30.857583894 +0000 UTC m=+0.051628576 container create cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6 (image=quay.io/ceph/ceph:v19, name=relaxed_lumiere, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:30 np0005550137 systemd[1]: Started libpod-conmon-cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6.scope.
Dec  8 04:47:30 np0005550137 podman[87406]: 2025-12-08 09:47:30.838565644 +0000 UTC m=+0.032610306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:30 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d29744c4dcd74f0f99ab033e6bce75fef5402009f467ab4a0b345b0b2df2bbb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d29744c4dcd74f0f99ab033e6bce75fef5402009f467ab4a0b345b0b2df2bbb1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d29744c4dcd74f0f99ab033e6bce75fef5402009f467ab4a0b345b0b2df2bbb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:30 np0005550137 podman[87406]: 2025-12-08 09:47:30.962420199 +0000 UTC m=+0.156464911 container init cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6 (image=quay.io/ceph/ceph:v19, name=relaxed_lumiere, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:30 np0005550137 podman[87406]: 2025-12-08 09:47:30.968867206 +0000 UTC m=+0.162911878 container start cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6 (image=quay.io/ceph/ceph:v19, name=relaxed_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Dec  8 04:47:30 np0005550137 podman[87406]: 2025-12-08 09:47:30.972558363 +0000 UTC m=+0.166603035 container attach cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6 (image=quay.io/ceph/ceph:v19, name=relaxed_lumiere, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: from='osd.2 [v2:192.168.122.102:6800/2213880029,v1:192.168.122.102:6801/2213880029]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1181990922' entity='client.admin' 
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:31 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.000075340s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.944480896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.000075340s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944480896s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[6.1b( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357873917s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 70.302452087s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[6.1b( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357873917s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.302452087s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.999293327s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.944259644s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.999445915s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.944427490s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.999445915s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944427490s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.999293327s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944259644s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.973017693s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918098450s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.998954773s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.944122314s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.973017693s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918098450s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.999066353s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.944252014s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.999066353s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944252014s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[6.1( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357782364s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 70.303115845s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.998954773s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944122314s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[6.1( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357782364s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303115845s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.998725891s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.944152832s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.998725891s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944152832s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.998367310s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.943923950s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.998367310s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.943923950s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.0( empty local-lis/les=25/27 n=0 ec=16/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357917786s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 70.303581238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.0( empty local-lis/les=25/27 n=0 ec=16/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357917786s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303581238s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[3.0( empty local-lis/les=23/24 n=0 ec=13/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.994618416s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.940399170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[3.0( empty local-lis/les=23/24 n=0 ec=13/13 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.994618416s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.940399170s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972688675s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918601990s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972688675s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918601990s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.d( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.358081818s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 70.304046631s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972672462s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918640137s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.d( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.358081818s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304046631s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972544670s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918655396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972672462s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918640137s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972544670s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918655396s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.b( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.358045578s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 70.304191589s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.b( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.358045578s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304191589s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[7.a( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972459793s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918685913s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.8( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.358019829s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 70.304252625s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[7.a( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972459793s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918685913s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.8( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.358019829s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304252625s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972378731s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918762207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[7.14( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972369194s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918762207s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972378731s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918762207s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[7.14( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972369194s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918762207s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972288132s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918800354s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.972288132s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918800354s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.992330551s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.939498901s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.12( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357446671s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 70.304702759s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.12( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357446671s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304702759s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.971539497s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918838501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.13( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357419014s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 active pruub 70.304748535s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.971539497s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918838501s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[5.13( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=30 pruub=10.357419014s) [] r=-1 lpr=30 pi=[25,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304748535s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.992330551s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.939498901s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[7.1d( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.971382141s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 active pruub 74.918884277s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[7.1d( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=30 pruub=14.971382141s) [] r=-1 lpr=30 pi=[28,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918884277s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.998869896s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 74.944084167s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 30 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=14.998869896s) [] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944084167s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:31 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2213880029; not ready for session (expect reconnect)
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:31 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:31 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:47:31 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  8 04:47:31 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Dec  8 04:47:31 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  8 04:47:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:31 np0005550137 relaxed_lumiere[87422]: Scheduled rgw.rgw update...
Dec  8 04:47:31 np0005550137 relaxed_lumiere[87422]: Scheduled ingress.rgw.default update...
Dec  8 04:47:31 np0005550137 systemd[1]: libpod-cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6.scope: Deactivated successfully.
Dec  8 04:47:31 np0005550137 podman[87447]: 2025-12-08 09:47:31.504908106 +0000 UTC m=+0.031944956 container died cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6 (image=quay.io/ceph/ceph:v19, name=relaxed_lumiere, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Dec  8 04:47:31 np0005550137 systemd[1]: var-lib-containers-storage-overlay-d29744c4dcd74f0f99ab033e6bce75fef5402009f467ab4a0b345b0b2df2bbb1-merged.mount: Deactivated successfully.
Dec  8 04:47:31 np0005550137 podman[87447]: 2025-12-08 09:47:31.547627122 +0000 UTC m=+0.074663952 container remove cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6 (image=quay.io/ceph/ceph:v19, name=relaxed_lumiere, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:31 np0005550137 systemd[1]: libpod-conmon-cb7d2b5cfb9c27d1406ab50561ed2cf5a0f46346882a32abbdf0bc23c2f14eb6.scope: Deactivated successfully.
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec  8 04:47:31 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec  8 04:47:32 np0005550137 python3[87537]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:47:32 np0005550137 ceph-mon[74516]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Dec  8 04:47:32 np0005550137 ceph-mon[74516]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  8 04:47:32 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:32 np0005550137 ceph-mon[74516]: Saving service ingress.rgw.default spec with placement count:2
Dec  8 04:47:32 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:32 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2213880029; not ready for session (expect reconnect)
Dec  8 04:47:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:32 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:32 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v91: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:32 np0005550137 python3[87608]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187251.6962276-37197-274580564343066/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:47:32 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Dec  8 04:47:32 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Dec  8 04:47:33 np0005550137 python3[87658]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 6228985e-e217-4bb6-93a3-5f73cb2469c2 (Global Recovery Event) in 20 seconds
Dec  8 04:47:33 np0005550137 podman[87659]: 2025-12-08 09:47:33.149031647 +0000 UTC m=+0.042794640 container create 472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357 (image=quay.io/ceph/ceph:v19, name=crazy_elion, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:33 np0005550137 systemd[1]: Started libpod-conmon-472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357.scope.
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:33 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5f8dd1690389e01e7d089875c56b70285309fbf3de307e56de2c66632d4023/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5f8dd1690389e01e7d089875c56b70285309fbf3de307e56de2c66632d4023/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf5f8dd1690389e01e7d089875c56b70285309fbf3de307e56de2c66632d4023/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:33 np0005550137 podman[87659]: 2025-12-08 09:47:33.130758398 +0000 UTC m=+0.024521401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:33 np0005550137 podman[87659]: 2025-12-08 09:47:33.244058198 +0000 UTC m=+0.137821191 container init 472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357 (image=quay.io/ceph/ceph:v19, name=crazy_elion, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:47:33 np0005550137 podman[87659]: 2025-12-08 09:47:33.255943152 +0000 UTC m=+0.149706185 container start 472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357 (image=quay.io/ceph/ceph:v19, name=crazy_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:33 np0005550137 podman[87659]: 2025-12-08 09:47:33.262053069 +0000 UTC m=+0.155816142 container attach 472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357 (image=quay.io/ceph/ceph:v19, name=crazy_elion, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2213880029; not ready for session (expect reconnect)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:33 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec  8 04:47:33 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service node-exporter spec with placement *
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  8 04:47:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:33 np0005550137 crazy_elion[87674]: Scheduled node-exporter update...
Dec  8 04:47:33 np0005550137 crazy_elion[87674]: Scheduled grafana update...
Dec  8 04:47:33 np0005550137 crazy_elion[87674]: Scheduled prometheus update...
Dec  8 04:47:33 np0005550137 crazy_elion[87674]: Scheduled alertmanager update...
Dec  8 04:47:33 np0005550137 systemd[1]: libpod-472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357.scope: Deactivated successfully.
Dec  8 04:47:33 np0005550137 podman[87659]: 2025-12-08 09:47:33.778371408 +0000 UTC m=+0.672134441 container died 472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357 (image=quay.io/ceph/ceph:v19, name=crazy_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:33 np0005550137 systemd[1]: var-lib-containers-storage-overlay-cf5f8dd1690389e01e7d089875c56b70285309fbf3de307e56de2c66632d4023-merged.mount: Deactivated successfully.
Dec  8 04:47:33 np0005550137 podman[87659]: 2025-12-08 09:47:33.818980014 +0000 UTC m=+0.712743007 container remove 472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357 (image=quay.io/ceph/ceph:v19, name=crazy_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  8 04:47:33 np0005550137 systemd[1]: libpod-conmon-472c6c5dee6cccd7a787d879b0d86c58537eec62e39fee342b415627a1a1b357.scope: Deactivated successfully.
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:33 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:34 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:34 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:34 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2213880029; not ready for session (expect reconnect)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:34 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Adjusting osd_memory_target on compute-2 to 127.9M
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Unable to set osd_memory_target on compute-2 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Saving service node-exporter spec with placement *
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Saving service grafana spec with placement compute-0;count:1
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Saving service prometheus spec with placement compute-0;count:1
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Saving service alertmanager spec with placement compute-0;count:1
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:34 np0005550137 python3[87988]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:34 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v92: 193 pgs: 193 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Dec  8 04:47:34 np0005550137 podman[88058]: 2025-12-08 09:47:34.404592359 +0000 UTC m=+0.040304959 container create 7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e (image=quay.io/ceph/ceph:v19, name=naughty_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:34 np0005550137 systemd[1]: Started libpod-conmon-7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e.scope.
Dec  8 04:47:34 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:34 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2535ce156465997c893d268f0caae7b5e03c1a897fc66cba4383e4f78fa755e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:34 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2535ce156465997c893d268f0caae7b5e03c1a897fc66cba4383e4f78fa755e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:34 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2535ce156465997c893d268f0caae7b5e03c1a897fc66cba4383e4f78fa755e6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:34 np0005550137 podman[88058]: 2025-12-08 09:47:34.480556518 +0000 UTC m=+0.116269158 container init 7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e (image=quay.io/ceph/ceph:v19, name=naughty_williams, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:34 np0005550137 podman[88058]: 2025-12-08 09:47:34.388097141 +0000 UTC m=+0.023809751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:34 np0005550137 podman[88058]: 2025-12-08 09:47:34.486566222 +0000 UTC m=+0.122278822 container start 7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e (image=quay.io/ceph/ceph:v19, name=naughty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  8 04:47:34 np0005550137 podman[88058]: 2025-12-08 09:47:34.490048642 +0000 UTC m=+0.125761242 container attach 7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e (image=quay.io/ceph/ceph:v19, name=naughty_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:34 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2361686037' entity='client.admin' 
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:34 np0005550137 systemd[1]: libpod-7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e.scope: Deactivated successfully.
Dec  8 04:47:34 np0005550137 podman[88058]: 2025-12-08 09:47:34.870496178 +0000 UTC m=+0.506208798 container died 7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e (image=quay.io/ceph/ceph:v19, name=naughty_williams, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  8 04:47:34 np0005550137 systemd[1]: var-lib-containers-storage-overlay-2535ce156465997c893d268f0caae7b5e03c1a897fc66cba4383e4f78fa755e6-merged.mount: Deactivated successfully.
Dec  8 04:47:34 np0005550137 podman[88058]: 2025-12-08 09:47:34.928416315 +0000 UTC m=+0.564128915 container remove 7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e (image=quay.io/ceph/ceph:v19, name=naughty_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:34 np0005550137 systemd[1]: libpod-conmon-7d45189db318a681e4b6a2a0561e96205bf70b972cd4937fdd93fd40fecd318e.scope: Deactivated successfully.
Dec  8 04:47:35 np0005550137 python3[88310]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:35 np0005550137 ceph-mgr[74806]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/2213880029; not ready for session (expect reconnect)
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:35 np0005550137 ceph-mgr[74806]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2361686037' entity='client.admin' 
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:47:35 np0005550137 podman[88334]: 2025-12-08 09:47:35.31959278 +0000 UTC m=+0.045310563 container create 25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001 (image=quay.io/ceph/ceph:v19, name=intelligent_kalam, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:35 np0005550137 systemd[1]: Started libpod-conmon-25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001.scope.
Dec  8 04:47:35 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10bdba42024b48727bb6cf1255bc2a273ba6a161a8a62bbd4479fdc77f811f8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10bdba42024b48727bb6cf1255bc2a273ba6a161a8a62bbd4479fdc77f811f8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f10bdba42024b48727bb6cf1255bc2a273ba6a161a8a62bbd4479fdc77f811f8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:35 np0005550137 podman[88334]: 2025-12-08 09:47:35.295957276 +0000 UTC m=+0.021675069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:35 np0005550137 podman[88334]: 2025-12-08 09:47:35.400482162 +0000 UTC m=+0.126199935 container init 25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001 (image=quay.io/ceph/ceph:v19, name=intelligent_kalam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:35 np0005550137 podman[88334]: 2025-12-08 09:47:35.406826656 +0000 UTC m=+0.132544449 container start 25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001 (image=quay.io/ceph/ceph:v19, name=intelligent_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:35 np0005550137 podman[88334]: 2025-12-08 09:47:35.413057026 +0000 UTC m=+0.138774829 container attach 25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001 (image=quay.io/ceph/ceph:v19, name=intelligent_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  8 04:47:35 np0005550137 podman[88371]: 2025-12-08 09:47:35.483244079 +0000 UTC m=+0.041550744 container create b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:35 np0005550137 systemd[1]: Started libpod-conmon-b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c.scope.
Dec  8 04:47:35 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:35 np0005550137 podman[88371]: 2025-12-08 09:47:35.555371197 +0000 UTC m=+0.113677892 container init b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  8 04:47:35 np0005550137 podman[88371]: 2025-12-08 09:47:35.561048391 +0000 UTC m=+0.119355046 container start b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:35 np0005550137 podman[88371]: 2025-12-08 09:47:35.466522805 +0000 UTC m=+0.024829460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:35 np0005550137 hungry_stonebraker[88397]: 167 167
Dec  8 04:47:35 np0005550137 podman[88371]: 2025-12-08 09:47:35.564277205 +0000 UTC m=+0.122583860 container attach b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  8 04:47:35 np0005550137 systemd[1]: libpod-b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c.scope: Deactivated successfully.
Dec  8 04:47:35 np0005550137 conmon[88397]: conmon b68d64950cd787336dd2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c.scope/container/memory.events
Dec  8 04:47:35 np0005550137 podman[88371]: 2025-12-08 09:47:35.565335125 +0000 UTC m=+0.123641780 container died b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:35 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e339b4e0e46baf03e437248dddf3f8448d4cdd378e389a9e230e87f876d277ac-merged.mount: Deactivated successfully.
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec  8 04:47:35 np0005550137 podman[88371]: 2025-12-08 09:47:35.600237246 +0000 UTC m=+0.158543901 container remove b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_stonebraker, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec  8 04:47:35 np0005550137 systemd[1]: libpod-conmon-b68d64950cd787336dd289bb7381d5fa0b4b1949a683c15b67bfeaf1d2747c6c.scope: Deactivated successfully.
Dec  8 04:47:35 np0005550137 podman[88431]: 2025-12-08 09:47:35.779346411 +0000 UTC m=+0.056775135 container create 76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_ellis, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2766465450' entity='client.admin' 
Dec  8 04:47:35 np0005550137 podman[88334]: 2025-12-08 09:47:35.812371937 +0000 UTC m=+0.538089720 container died 25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001 (image=quay.io/ceph/ceph:v19, name=intelligent_kalam, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Dec  8 04:47:35 np0005550137 systemd[1]: Started libpod-conmon-76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0.scope.
Dec  8 04:47:35 np0005550137 systemd[1]: libpod-25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001.scope: Deactivated successfully.
Dec  8 04:47:35 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec  8 04:47:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177e533b273551e23e485a77501d963bbaafe9d0aecc20f8d8856d396f99fc87/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177e533b273551e23e485a77501d963bbaafe9d0aecc20f8d8856d396f99fc87/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177e533b273551e23e485a77501d963bbaafe9d0aecc20f8d8856d396f99fc87/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177e533b273551e23e485a77501d963bbaafe9d0aecc20f8d8856d396f99fc87/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177e533b273551e23e485a77501d963bbaafe9d0aecc20f8d8856d396f99fc87/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:35 np0005550137 systemd[1]: var-lib-containers-storage-overlay-f10bdba42024b48727bb6cf1255bc2a273ba6a161a8a62bbd4479fdc77f811f8-merged.mount: Deactivated successfully.
Dec  8 04:47:35 np0005550137 podman[88431]: 2025-12-08 09:47:35.762058531 +0000 UTC m=+0.039487255 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec  8 04:47:35 np0005550137 podman[88431]: 2025-12-08 09:47:35.866601368 +0000 UTC m=+0.144030092 container init 76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/2213880029,v1:192.168.122.102:6801/2213880029] boot
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[6.1b( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740901470s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.302452087s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.382907867s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944480896s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[6.1b( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740872383s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.302452087s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.382871628s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944480896s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.382554054s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944427490s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.382541656s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944427490s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 podman[88431]: 2025-12-08 09:47:35.872088437 +0000 UTC m=+0.149517161 container start 76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_ellis, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381950378s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944252014s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381937027s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944252014s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381697655s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944122314s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[3.8( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381689072s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944122314s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381570816s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944084167s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381563187s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944084167s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[6.1( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740491867s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303115845s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[6.1( empty local-lis/les=25/27 n=0 ec=25/17 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740471363s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303115845s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381485939s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944259644s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[3.1b( empty local-lis/les=23/24 n=0 ec=23/13 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381472588s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944259644s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381250381s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944152832s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.381237984s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.944152832s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.355156898s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918098450s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.1b( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.355140686s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918098450s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.380894661s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.943923950s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.380883217s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.943923950s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.0( empty local-lis/les=25/27 n=0 ec=16/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740465164s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303581238s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.0( empty local-lis/les=25/27 n=0 ec=16/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740452290s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.303581238s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=23/24 n=0 ec=13/13 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.377091408s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.940399170s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[3.0( empty local-lis/les=23/24 n=0 ec=13/13 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.377077103s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.940399170s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.355154037s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918601990s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.355174065s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918640137s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.d( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740571499s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304046631s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.d( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.355161667s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918640137s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.a( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.355113983s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918601990s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.355030060s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918655396s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.c( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354991913s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918655396s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.d( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740555286s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304046631s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.b( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740469933s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304191589s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.b( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740458965s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304191589s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[7.a( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354860306s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918685913s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[7.a( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354848862s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918685913s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.8( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740404606s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304252625s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.8( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740383625s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304252625s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354805946s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918762207s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[7.14( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354793549s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918762207s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.10( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354792595s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918762207s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[7.14( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354768753s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918762207s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354775429s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918800354s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.13( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354761124s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918800354s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.375432968s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.939498901s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/15 lis/c=23/23 les/c/f=24/24/0 sis=31 pruub=10.375420570s) [2] r=-1 lpr=31 pi=[23,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.939498901s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354692459s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918838501s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[2.15( empty local-lis/les=28/29 n=0 ec=21/12 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354681969s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918838501s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.12( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740491867s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304702759s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.13( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740507126s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304748535s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.13( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740497112s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304748535s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[5.12( empty local-lis/les=25/27 n=0 ec=25/16 lis/c=25/25 les/c/f=27/27/0 sis=31 pruub=5.740467072s) [2] r=-1 lpr=31 pi=[25,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.304702759s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[7.1d( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354492188s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918884277s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:47:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 31 pg[7.1d( empty local-lis/les=28/29 n=0 ec=26/19 lis/c=28/28 les/c/f=29/29/0 sis=31 pruub=10.354479790s) [2] r=-1 lpr=31 pi=[28,31)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 74.918884277s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:47:35 np0005550137 podman[88431]: 2025-12-08 09:47:35.879586093 +0000 UTC m=+0.157014817 container attach 76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  8 04:47:35 np0005550137 podman[88334]: 2025-12-08 09:47:35.885114863 +0000 UTC m=+0.610832636 container remove 25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001 (image=quay.io/ceph/ceph:v19, name=intelligent_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:35 np0005550137 systemd[1]: libpod-conmon-25747754412e2abb3026c3ed5c4f81b480f7528b457a5e40b100fcc9bea9e001.scope: Deactivated successfully.
Dec  8 04:47:36 np0005550137 pensive_ellis[88450]: --> passed data devices: 0 physical, 1 LVM
Dec  8 04:47:36 np0005550137 pensive_ellis[88450]: --> All data devices are unavailable
Dec  8 04:47:36 np0005550137 python3[88494]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:36 np0005550137 systemd[1]: libpod-76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0.scope: Deactivated successfully.
Dec  8 04:47:36 np0005550137 podman[88431]: 2025-12-08 09:47:36.245141198 +0000 UTC m=+0.522569922 container died 76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_ellis, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:36 np0005550137 systemd[1]: var-lib-containers-storage-overlay-177e533b273551e23e485a77501d963bbaafe9d0aecc20f8d8856d396f99fc87-merged.mount: Deactivated successfully.
Dec  8 04:47:36 np0005550137 podman[88431]: 2025-12-08 09:47:36.289526893 +0000 UTC m=+0.566955637 container remove 76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:36 np0005550137 podman[88502]: 2025-12-08 09:47:36.296130843 +0000 UTC m=+0.057537086 container create 1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b (image=quay.io/ceph/ceph:v19, name=keen_sammet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:36 np0005550137 ceph-mon[74516]: OSD bench result of 5992.083020 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec  8 04:47:36 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2766465450' entity='client.admin' 
Dec  8 04:47:36 np0005550137 ceph-mon[74516]: osd.2 [v2:192.168.122.102:6800/2213880029,v1:192.168.122.102:6801/2213880029] boot
Dec  8 04:47:36 np0005550137 systemd[1]: Started libpod-conmon-1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b.scope.
Dec  8 04:47:36 np0005550137 systemd[1]: libpod-conmon-76a53b6c2b75e64db9f77e3e8906cc2794be0fea0672a86942401640f1ad56e0.scope: Deactivated successfully.
Dec  8 04:47:36 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:36 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30878a03ff3dc38bd01841a195a46650fefcf59ce502a0e5c725d77e023bcd97/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:36 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30878a03ff3dc38bd01841a195a46650fefcf59ce502a0e5c725d77e023bcd97/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:36 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30878a03ff3dc38bd01841a195a46650fefcf59ce502a0e5c725d77e023bcd97/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:36 np0005550137 podman[88502]: 2025-12-08 09:47:36.27703621 +0000 UTC m=+0.038442463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:36 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v94: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:36 np0005550137 podman[88502]: 2025-12-08 09:47:36.384477361 +0000 UTC m=+0.145883644 container init 1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b (image=quay.io/ceph/ceph:v19, name=keen_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Dec  8 04:47:36 np0005550137 podman[88502]: 2025-12-08 09:47:36.392616197 +0000 UTC m=+0.154022470 container start 1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b (image=quay.io/ceph/ceph:v19, name=keen_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:36 np0005550137 podman[88502]: 2025-12-08 09:47:36.396287784 +0000 UTC m=+0.157694057 container attach 1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b (image=quay.io/ceph/ceph:v19, name=keen_sammet, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:36 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec  8 04:47:36 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec  8 04:47:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Dec  8 04:47:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1226724288' entity='client.admin' 
Dec  8 04:47:36 np0005550137 systemd[1]: libpod-1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b.scope: Deactivated successfully.
Dec  8 04:47:36 np0005550137 podman[88628]: 2025-12-08 09:47:36.824162221 +0000 UTC m=+0.028489066 container died 1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b (image=quay.io/ceph/ceph:v19, name=keen_sammet, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec  8 04:47:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec  8 04:47:36 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec  8 04:47:36 np0005550137 systemd[1]: var-lib-containers-storage-overlay-30878a03ff3dc38bd01841a195a46650fefcf59ce502a0e5c725d77e023bcd97-merged.mount: Deactivated successfully.
Dec  8 04:47:36 np0005550137 podman[88644]: 2025-12-08 09:47:36.894881258 +0000 UTC m=+0.057174456 container remove 1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b (image=quay.io/ceph/ceph:v19, name=keen_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  8 04:47:36 np0005550137 podman[88650]: 2025-12-08 09:47:36.899115791 +0000 UTC m=+0.047349331 container create 84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:36 np0005550137 systemd[1]: libpod-conmon-1d77300fb60567d3704f02d8c964fc76fbbe01a9a70545e1ed123b62e4aee27b.scope: Deactivated successfully.
Dec  8 04:47:36 np0005550137 systemd[1]: Started libpod-conmon-84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f.scope.
Dec  8 04:47:36 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:36 np0005550137 podman[88650]: 2025-12-08 09:47:36.955583486 +0000 UTC m=+0.103817126 container init 84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:36 np0005550137 podman[88650]: 2025-12-08 09:47:36.960279062 +0000 UTC m=+0.108512602 container start 84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:36 np0005550137 podman[88650]: 2025-12-08 09:47:36.963602748 +0000 UTC m=+0.111836308 container attach 84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  8 04:47:36 np0005550137 strange_lehmann[88673]: 167 167
Dec  8 04:47:36 np0005550137 systemd[1]: libpod-84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f.scope: Deactivated successfully.
Dec  8 04:47:36 np0005550137 podman[88650]: 2025-12-08 09:47:36.965310868 +0000 UTC m=+0.113544408 container died 84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lehmann, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:36 np0005550137 podman[88650]: 2025-12-08 09:47:36.873955703 +0000 UTC m=+0.022189293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:36 np0005550137 systemd[1]: var-lib-containers-storage-overlay-4f1cceddf4518e2c0210fbcccb0a4922d8076b74007cfc84be19c1c9a1218f8a-merged.mount: Deactivated successfully.
Dec  8 04:47:37 np0005550137 podman[88650]: 2025-12-08 09:47:37.007128408 +0000 UTC m=+0.155361978 container remove 84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  8 04:47:37 np0005550137 systemd[1]: libpod-conmon-84089745133ba3e3ea85dd14d0dbdf4c06192c679b99d903bc1c930237b3f64f.scope: Deactivated successfully.
Dec  8 04:47:37 np0005550137 podman[88695]: 2025-12-08 09:47:37.194212395 +0000 UTC m=+0.053825449 container create 55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_yalow, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  8 04:47:37 np0005550137 systemd[1]: Started libpod-conmon-55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585.scope.
Dec  8 04:47:37 np0005550137 podman[88695]: 2025-12-08 09:47:37.168301685 +0000 UTC m=+0.027914759 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:37 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3169c3e6c6f2a75c6e909280ef411a92d81fc469338e99d32df982962859273/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3169c3e6c6f2a75c6e909280ef411a92d81fc469338e99d32df982962859273/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3169c3e6c6f2a75c6e909280ef411a92d81fc469338e99d32df982962859273/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3169c3e6c6f2a75c6e909280ef411a92d81fc469338e99d32df982962859273/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:37 np0005550137 podman[88695]: 2025-12-08 09:47:37.335097604 +0000 UTC m=+0.194710708 container init 55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_yalow, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  8 04:47:37 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1226724288' entity='client.admin' 
Dec  8 04:47:37 np0005550137 podman[88695]: 2025-12-08 09:47:37.343732274 +0000 UTC m=+0.203345318 container start 55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  8 04:47:37 np0005550137 podman[88695]: 2025-12-08 09:47:37.357800081 +0000 UTC m=+0.217413095 container attach 55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:37 np0005550137 python3[88741]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:37 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec  8 04:47:37 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]: {
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:    "1": [
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:        {
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "devices": [
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "/dev/loop3"
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            ],
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "lv_name": "ceph_lv0",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "lv_size": "21470642176",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ceb838ef-9d5d-54e4-bddb-2f01adce2ad4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=10863df8-16d4-4896-ae26-227efb76290e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "lv_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "name": "ceph_lv0",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "tags": {
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.block_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.cephx_lockbox_secret": "",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.cluster_fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.cluster_name": "ceph",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.crush_device_class": "",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.encrypted": "0",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.osd_fsid": "10863df8-16d4-4896-ae26-227efb76290e",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.osd_id": "1",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.type": "block",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.vdo": "0",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:                "ceph.with_tpm": "0"
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            },
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "type": "block",
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:            "vg_name": "ceph_vg0"
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:        }
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]:    ]
Dec  8 04:47:37 np0005550137 flamboyant_yalow[88711]: }
Dec  8 04:47:37 np0005550137 systemd[1]: libpod-55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585.scope: Deactivated successfully.
Dec  8 04:47:37 np0005550137 podman[88695]: 2025-12-08 09:47:37.697151616 +0000 UTC m=+0.556764631 container died 55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  8 04:47:37 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e3169c3e6c6f2a75c6e909280ef411a92d81fc469338e99d32df982962859273-merged.mount: Deactivated successfully.
Dec  8 04:47:37 np0005550137 podman[88695]: 2025-12-08 09:47:37.739192874 +0000 UTC m=+0.598805888 container remove 55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_yalow, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec  8 04:47:37 np0005550137 systemd[1]: libpod-conmon-55d3247c90c0fc74acd8c4f04a9d477e1e7e3addf2da796050ea1f1e6bcd2585.scope: Deactivated successfully.
Dec  8 04:47:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec  8 04:47:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec  8 04:47:37 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec  8 04:47:38 np0005550137 python3[88837]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.kitiwu/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:38 np0005550137 podman[88846]: 2025-12-08 09:47:38.119866625 +0000 UTC m=+0.054811378 container create 6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb (image=quay.io/ceph/ceph:v19, name=priceless_shaw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Dec  8 04:47:38 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 12 completed events
Dec  8 04:47:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:47:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:38 np0005550137 systemd[1]: Started libpod-conmon-6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb.scope.
Dec  8 04:47:38 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c15907d076f9f24972a3484848bfd217f4e4047cfd91760c31123c9f8e9fb4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c15907d076f9f24972a3484848bfd217f4e4047cfd91760c31123c9f8e9fb4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10c15907d076f9f24972a3484848bfd217f4e4047cfd91760c31123c9f8e9fb4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:38 np0005550137 podman[88846]: 2025-12-08 09:47:38.097798436 +0000 UTC m=+0.032743169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:38 np0005550137 podman[88846]: 2025-12-08 09:47:38.196471973 +0000 UTC m=+0.131416776 container init 6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb (image=quay.io/ceph/ceph:v19, name=priceless_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:38 np0005550137 podman[88846]: 2025-12-08 09:47:38.210059187 +0000 UTC m=+0.145003940 container start 6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb (image=quay.io/ceph/ceph:v19, name=priceless_shaw, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  8 04:47:38 np0005550137 podman[88846]: 2025-12-08 09:47:38.215294388 +0000 UTC m=+0.150239151 container attach 6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb (image=quay.io/ceph/ceph:v19, name=priceless_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Dec  8 04:47:38 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:38 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v97: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:38 np0005550137 podman[88905]: 2025-12-08 09:47:38.401386925 +0000 UTC m=+0.068192664 container create e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:38 np0005550137 systemd[1]: Started libpod-conmon-e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5.scope.
Dec  8 04:47:38 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:38 np0005550137 podman[88905]: 2025-12-08 09:47:38.373320843 +0000 UTC m=+0.040126662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:38 np0005550137 podman[88905]: 2025-12-08 09:47:38.476048498 +0000 UTC m=+0.142854257 container init e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec  8 04:47:38 np0005550137 podman[88905]: 2025-12-08 09:47:38.484014488 +0000 UTC m=+0.150820217 container start e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shamir, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  8 04:47:38 np0005550137 podman[88905]: 2025-12-08 09:47:38.487341745 +0000 UTC m=+0.154147574 container attach e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shamir, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:38 np0005550137 condescending_shamir[88940]: 167 167
Dec  8 04:47:38 np0005550137 systemd[1]: libpod-e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5.scope: Deactivated successfully.
Dec  8 04:47:38 np0005550137 podman[88905]: 2025-12-08 09:47:38.491033261 +0000 UTC m=+0.157838990 container died e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Dec  8 04:47:38 np0005550137 systemd[1]: var-lib-containers-storage-overlay-228993bbac9c46ce8c86944b1b6c92ffbb864f3eddbec605bbeb61d71ef33a1b-merged.mount: Deactivated successfully.
Dec  8 04:47:38 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec  8 04:47:38 np0005550137 podman[88905]: 2025-12-08 09:47:38.530640928 +0000 UTC m=+0.197446657 container remove e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:38 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec  8 04:47:38 np0005550137 systemd[1]: libpod-conmon-e2a582e0ceeaf99ddf33591e5779b27ba6a8a632c1e593503d3475c4866dd8e5.scope: Deactivated successfully.
Dec  8 04:47:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.kitiwu/server_addr}] v 0)
Dec  8 04:47:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2940995855' entity='client.admin' 
Dec  8 04:47:38 np0005550137 systemd[1]: libpod-6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb.scope: Deactivated successfully.
Dec  8 04:47:38 np0005550137 conmon[88876]: conmon 6c09c0148071807f928b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb.scope/container/memory.events
Dec  8 04:47:38 np0005550137 podman[88846]: 2025-12-08 09:47:38.623383723 +0000 UTC m=+0.558328436 container died 6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb (image=quay.io/ceph/ceph:v19, name=priceless_shaw, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:38 np0005550137 systemd[1]: var-lib-containers-storage-overlay-10c15907d076f9f24972a3484848bfd217f4e4047cfd91760c31123c9f8e9fb4-merged.mount: Deactivated successfully.
Dec  8 04:47:38 np0005550137 podman[88846]: 2025-12-08 09:47:38.659106167 +0000 UTC m=+0.594050890 container remove 6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb (image=quay.io/ceph/ceph:v19, name=priceless_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:38 np0005550137 systemd[1]: libpod-conmon-6c09c0148071807f928b087753dd0f35d07d3b0e7e4f640c3d1adda4fec31bdb.scope: Deactivated successfully.
Dec  8 04:47:38 np0005550137 podman[88975]: 2025-12-08 09:47:38.734779218 +0000 UTC m=+0.041402520 container create 275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_kirch, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  8 04:47:38 np0005550137 systemd[1]: Started libpod-conmon-275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3.scope.
Dec  8 04:47:38 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34b13f2f3b3c7c73b11fe5031836be850fa1c730f21b8b56c08fb7818dd3553/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34b13f2f3b3c7c73b11fe5031836be850fa1c730f21b8b56c08fb7818dd3553/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34b13f2f3b3c7c73b11fe5031836be850fa1c730f21b8b56c08fb7818dd3553/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c34b13f2f3b3c7c73b11fe5031836be850fa1c730f21b8b56c08fb7818dd3553/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:38 np0005550137 podman[88975]: 2025-12-08 09:47:38.714360388 +0000 UTC m=+0.020983740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:38 np0005550137 podman[88975]: 2025-12-08 09:47:38.810693416 +0000 UTC m=+0.117316738 container init 275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_kirch, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  8 04:47:38 np0005550137 podman[88975]: 2025-12-08 09:47:38.816906046 +0000 UTC m=+0.123529348 container start 275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_kirch, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  8 04:47:38 np0005550137 podman[88975]: 2025-12-08 09:47:38.819485971 +0000 UTC m=+0.126109283 container attach 275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:39 np0005550137 lvm[89092]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:47:39 np0005550137 lvm[89092]: VG ceph_vg0 finished
Dec  8 04:47:39 np0005550137 vibrant_kirch[88992]: {}
Dec  8 04:47:39 np0005550137 systemd[1]: libpod-275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3.scope: Deactivated successfully.
Dec  8 04:47:39 np0005550137 podman[88975]: 2025-12-08 09:47:39.496846312 +0000 UTC m=+0.803469614 container died 275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  8 04:47:39 np0005550137 systemd[1]: libpod-275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3.scope: Consumed 1.049s CPU time.
Dec  8 04:47:39 np0005550137 systemd[1]: var-lib-containers-storage-overlay-c34b13f2f3b3c7c73b11fe5031836be850fa1c730f21b8b56c08fb7818dd3553-merged.mount: Deactivated successfully.
Dec  8 04:47:39 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec  8 04:47:39 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec  8 04:47:39 np0005550137 podman[88975]: 2025-12-08 09:47:39.546156179 +0000 UTC m=+0.852779471 container remove 275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_kirch, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:39 np0005550137 python3[89090]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.mmkaif/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:39 np0005550137 systemd[1]: libpod-conmon-275e2e3bf1513229c602a928968f3a466e0ce20dbea520164562150af02b24d3.scope: Deactivated successfully.
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:39 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 3c2da717-374b-4152-9776-bbb1070bc81d (Updating rgw.rgw deployment (+3 -> 3))
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.dimexm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.dimexm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/2940995855' entity='client.admin' 
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:39 np0005550137 podman[89108]: 2025-12-08 09:47:39.624839667 +0000 UTC m=+0.055441486 container create f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d (image=quay.io/ceph/ceph:v19, name=beautiful_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.dimexm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:39 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.dimexm on compute-2
Dec  8 04:47:39 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.dimexm on compute-2
Dec  8 04:47:39 np0005550137 systemd[1]: Started libpod-conmon-f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d.scope.
Dec  8 04:47:39 np0005550137 podman[89108]: 2025-12-08 09:47:39.600369409 +0000 UTC m=+0.030971248 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:39 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:39 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd31e5e8dc3de2b1ba4da049930b709f97364d57b222185c2853337159cbc5e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:39 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd31e5e8dc3de2b1ba4da049930b709f97364d57b222185c2853337159cbc5e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:39 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbd31e5e8dc3de2b1ba4da049930b709f97364d57b222185c2853337159cbc5e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:39 np0005550137 podman[89108]: 2025-12-08 09:47:39.720342713 +0000 UTC m=+0.150944622 container init f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d (image=quay.io/ceph/ceph:v19, name=beautiful_wozniak, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:39 np0005550137 podman[89108]: 2025-12-08 09:47:39.728817398 +0000 UTC m=+0.159419247 container start f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d (image=quay.io/ceph/ceph:v19, name=beautiful_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  8 04:47:39 np0005550137 podman[89108]: 2025-12-08 09:47:39.733375461 +0000 UTC m=+0.163977300 container attach f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d (image=quay.io/ceph/ceph:v19, name=beautiful_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True)
Dec  8 04:47:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.mmkaif/server_addr}] v 0)
Dec  8 04:47:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1238176050' entity='client.admin' 
Dec  8 04:47:40 np0005550137 systemd[1]: libpod-f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d.scope: Deactivated successfully.
Dec  8 04:47:40 np0005550137 podman[89108]: 2025-12-08 09:47:40.117223893 +0000 UTC m=+0.547825712 container died f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d (image=quay.io/ceph/ceph:v19, name=beautiful_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:40 np0005550137 systemd[1]: var-lib-containers-storage-overlay-cbd31e5e8dc3de2b1ba4da049930b709f97364d57b222185c2853337159cbc5e-merged.mount: Deactivated successfully.
Dec  8 04:47:40 np0005550137 podman[89108]: 2025-12-08 09:47:40.157206811 +0000 UTC m=+0.587808630 container remove f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d (image=quay.io/ceph/ceph:v19, name=beautiful_wozniak, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:40 np0005550137 systemd[1]: libpod-conmon-f7096f9ce66fcea61a66d5db7f5f35321c0b7d063a3aa683e515d43376d0696d.scope: Deactivated successfully.
Dec  8 04:47:40 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v98: 193 pgs: 57 peering, 136 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:40 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec  8 04:47:40 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec  8 04:47:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.dimexm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:47:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.dimexm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:47:40 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:40 np0005550137 ceph-mon[74516]: Deploying daemon rgw.rgw.compute-2.dimexm on compute-2
Dec  8 04:47:40 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1238176050' entity='client.admin' 
Dec  8 04:47:41 np0005550137 python3[89187]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.zqytsv/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:41 np0005550137 podman[89188]: 2025-12-08 09:47:41.119503631 +0000 UTC m=+0.054155988 container create f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934 (image=quay.io/ceph/ceph:v19, name=strange_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Dec  8 04:47:41 np0005550137 systemd[1]: Started libpod-conmon-f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934.scope.
Dec  8 04:47:41 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:41 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b279941bf0d1e9f89f515da743cd0d55bb23b7f23dbc5854518064ef2a98ff18/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:41 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b279941bf0d1e9f89f515da743cd0d55bb23b7f23dbc5854518064ef2a98ff18/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:41 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b279941bf0d1e9f89f515da743cd0d55bb23b7f23dbc5854518064ef2a98ff18/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:41 np0005550137 podman[89188]: 2025-12-08 09:47:41.103117167 +0000 UTC m=+0.037769534 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:41 np0005550137 podman[89188]: 2025-12-08 09:47:41.203154734 +0000 UTC m=+0.137807121 container init f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934 (image=quay.io/ceph/ceph:v19, name=strange_brown, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:41 np0005550137 podman[89188]: 2025-12-08 09:47:41.210252909 +0000 UTC m=+0.144905256 container start f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934 (image=quay.io/ceph/ceph:v19, name=strange_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:41 np0005550137 podman[89188]: 2025-12-08 09:47:41.214203244 +0000 UTC m=+0.148855631 container attach f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934 (image=quay.io/ceph/ceph:v19, name=strange_brown, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.rblbpq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.rblbpq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.rblbpq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:41 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.rblbpq on compute-1
Dec  8 04:47:41 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.rblbpq on compute-1
Dec  8 04:47:41 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec  8 04:47:41 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.zqytsv/server_addr}] v 0)
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3406242588' entity='client.admin' 
Dec  8 04:47:41 np0005550137 systemd[1]: libpod-f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934.scope: Deactivated successfully.
Dec  8 04:47:41 np0005550137 podman[89188]: 2025-12-08 09:47:41.61129088 +0000 UTC m=+0.545943227 container died f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934 (image=quay.io/ceph/ceph:v19, name=strange_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True)
Dec  8 04:47:41 np0005550137 systemd[1]: var-lib-containers-storage-overlay-b279941bf0d1e9f89f515da743cd0d55bb23b7f23dbc5854518064ef2a98ff18-merged.mount: Deactivated successfully.
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.rblbpq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.rblbpq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:41 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3406242588' entity='client.admin' 
Dec  8 04:47:41 np0005550137 podman[89188]: 2025-12-08 09:47:41.662126122 +0000 UTC m=+0.596778489 container remove f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934 (image=quay.io/ceph/ceph:v19, name=strange_brown, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Dec  8 04:47:41 np0005550137 systemd[1]: libpod-conmon-f8961f1e1aa4a16311d4df6c92770cd9f2b8a3bdd6c3a956b034030deaf01934.scope: Deactivated successfully.
Dec  8 04:47:41 np0005550137 python3[89269]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:42 np0005550137 podman[89270]: 2025-12-08 09:47:42.058721744 +0000 UTC m=+0.054086407 container create 547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf (image=quay.io/ceph/ceph:v19, name=xenodochial_mayer, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Dec  8 04:47:42 np0005550137 systemd[1]: Started libpod-conmon-547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf.scope.
Dec  8 04:47:42 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:42 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb718c64c358750f575092ee0c3e7fd6b8c93f5c9bf160f6febb4635901970ec/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:42 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb718c64c358750f575092ee0c3e7fd6b8c93f5c9bf160f6febb4635901970ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:42 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb718c64c358750f575092ee0c3e7fd6b8c93f5c9bf160f6febb4635901970ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:42 np0005550137 podman[89270]: 2025-12-08 09:47:42.125362913 +0000 UTC m=+0.120727616 container init 547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf (image=quay.io/ceph/ceph:v19, name=xenodochial_mayer, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:42 np0005550137 podman[89270]: 2025-12-08 09:47:42.035241364 +0000 UTC m=+0.030606077 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:42 np0005550137 podman[89270]: 2025-12-08 09:47:42.16873472 +0000 UTC m=+0.164099383 container start 547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf (image=quay.io/ceph/ceph:v19, name=xenodochial_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:42 np0005550137 podman[89270]: 2025-12-08 09:47:42.172159588 +0000 UTC m=+0.167524251 container attach 547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf (image=quay.io/ceph/ceph:v19, name=xenodochial_mayer, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:42 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v99: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  8 04:47:42 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec  8 04:47:42 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec  8 04:47:42 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/698056903' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: Deploying daemon rgw.rgw.compute-1.rblbpq on compute-1
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.102:0/1719203410' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Dec  8 04:47:42 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/698056903' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  8 04:47:43 np0005550137 ceph-mgr[74806]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec  8 04:47:43 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:47:43 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec  8 04:47:43 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 35 pg[8.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/698056903' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  8 04:47:43 np0005550137 xenodochial_mayer[89285]: module 'dashboard' is already disabled
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.kitiwu(active, since 2m), standbys: compute-2.zqytsv, compute-1.mmkaif
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:47:43 np0005550137 systemd[1]: libpod-547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf.scope: Deactivated successfully.
Dec  8 04:47:43 np0005550137 podman[89270]: 2025-12-08 09:47:43.597563037 +0000 UTC m=+1.592927710 container died 547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf (image=quay.io/ceph/ceph:v19, name=xenodochial_mayer, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:47:43 np0005550137 systemd[1]: var-lib-containers-storage-overlay-cb718c64c358750f575092ee0c3e7fd6b8c93f5c9bf160f6febb4635901970ec-merged.mount: Deactivated successfully.
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:43 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.slkrtm on compute-0
Dec  8 04:47:43 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.slkrtm on compute-0
Dec  8 04:47:43 np0005550137 podman[89270]: 2025-12-08 09:47:43.652786227 +0000 UTC m=+1.648150890 container remove 547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf (image=quay.io/ceph/ceph:v19, name=xenodochial_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:43 np0005550137 systemd[1]: libpod-conmon-547309ab6c68c7b1d5e0208bdff24d0517b9018f003423a92a2f93c53c1482bf.scope: Deactivated successfully.
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/698056903' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:47:43 np0005550137 ceph-mon[74516]: from='mgr.14122 192.168.122.100:0/1555784564' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:43 np0005550137 python3[89399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:44 np0005550137 podman[89400]: 2025-12-08 09:47:44.02697299 +0000 UTC m=+0.045518749 container create ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9 (image=quay.io/ceph/ceph:v19, name=blissful_cartwright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:44 np0005550137 systemd[1]: Started libpod-conmon-ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9.scope.
Dec  8 04:47:44 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:44 np0005550137 podman[89400]: 2025-12-08 09:47:44.007761644 +0000 UTC m=+0.026307483 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:44 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5535e9178b0ca589f6b5dbf2472514683a0485494dddd99701c392b017f22d5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:44 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5535e9178b0ca589f6b5dbf2472514683a0485494dddd99701c392b017f22d5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:44 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5535e9178b0ca589f6b5dbf2472514683a0485494dddd99701c392b017f22d5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:44 np0005550137 podman[89400]: 2025-12-08 09:47:44.117710357 +0000 UTC m=+0.136256136 container init ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9 (image=quay.io/ceph/ceph:v19, name=blissful_cartwright, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:44 np0005550137 podman[89400]: 2025-12-08 09:47:44.124752711 +0000 UTC m=+0.143298470 container start ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9 (image=quay.io/ceph/ceph:v19, name=blissful_cartwright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  8 04:47:44 np0005550137 podman[89400]: 2025-12-08 09:47:44.128104118 +0000 UTC m=+0.146649887 container attach ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9 (image=quay.io/ceph/ceph:v19, name=blissful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  8 04:47:44 np0005550137 podman[89455]: 2025-12-08 09:47:44.197986482 +0000 UTC m=+0.035727886 container create a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  8 04:47:44 np0005550137 systemd[1]: Started libpod-conmon-a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c.scope.
Dec  8 04:47:44 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:44 np0005550137 podman[89455]: 2025-12-08 09:47:44.262262262 +0000 UTC m=+0.100003686 container init a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  8 04:47:44 np0005550137 podman[89455]: 2025-12-08 09:47:44.267807663 +0000 UTC m=+0.105549087 container start a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:44 np0005550137 unruffled_lederberg[89471]: 167 167
Dec  8 04:47:44 np0005550137 podman[89455]: 2025-12-08 09:47:44.27115646 +0000 UTC m=+0.108897914 container attach a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:44 np0005550137 systemd[1]: libpod-a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c.scope: Deactivated successfully.
Dec  8 04:47:44 np0005550137 podman[89455]: 2025-12-08 09:47:44.271700235 +0000 UTC m=+0.109441639 container died a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:44 np0005550137 podman[89455]: 2025-12-08 09:47:44.182506413 +0000 UTC m=+0.020247847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:44 np0005550137 systemd[1]: var-lib-containers-storage-overlay-7e2ede86b666faad152a8b5da86d2f9dfa9cb228e614b23f38da615e9d1dc7dd-merged.mount: Deactivated successfully.
Dec  8 04:47:44 np0005550137 podman[89455]: 2025-12-08 09:47:44.308538452 +0000 UTC m=+0.146279856 container remove a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:44 np0005550137 systemd[1]: libpod-conmon-a85aab5e28a79da56dc0514f4ed7f2f00d17b91c88bfd5e790f890d703e93c5c.scope: Deactivated successfully.
Dec  8 04:47:44 np0005550137 systemd[1]: Reloading.
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v102: 194 pgs: 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:44 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:47:44 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:47:44 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec  8 04:47:44 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1032861131' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec  8 04:47:44 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: Deploying daemon rgw.rgw.compute-0.slkrtm on compute-0
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1032861131' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.102:0/2102705496' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Dec  8 04:47:44 np0005550137 systemd[1]: Reloading.
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1032861131' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr respawn  1: '-n'
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr respawn  2: 'mgr.compute-0.kitiwu'
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr respawn  3: '-f'
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.kitiwu(active, since 2m), standbys: compute-2.zqytsv, compute-1.mmkaif
Dec  8 04:47:44 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:47:44 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:47:44 np0005550137 podman[89553]: 2025-12-08 09:47:44.828551898 +0000 UTC m=+0.050102252 container died ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9 (image=quay.io/ceph/ceph:v19, name=blissful_cartwright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setuser ceph since I am not root
Dec  8 04:47:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setgroup ceph since I am not root
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: pidfile_write: ignore empty --pid-file
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'alerts'
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:47:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'balancer'
Dec  8 04:47:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:44.996+0000 7fa18da21140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:47:45 np0005550137 systemd[1]: libpod-ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: var-lib-containers-storage-overlay-a5535e9178b0ca589f6b5dbf2472514683a0485494dddd99701c392b017f22d5-merged.mount: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: session-30.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: session-26.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 podman[89553]: 2025-12-08 09:47:45.030059032 +0000 UTC m=+0.251609356 container remove ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9 (image=quay.io/ceph/ceph:v19, name=blissful_cartwright, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:45 np0005550137 systemd[1]: Starting Ceph rgw.rgw.compute-0.slkrtm for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:47:45 np0005550137 systemd[1]: session-23.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: libpod-conmon-ba09afe62885823dd6698b4bab7f808a63f94d263286bb219b2aeb951f03b3d9.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: session-31.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: session-21.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: session-32.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 30 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 26 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 23 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 31 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 21 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 32 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd[1]: session-24.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: session-25.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 25 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd[1]: session-29.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 24 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 29 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Session 33 logged out. Waiting for processes to exit.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 32.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 24.
Dec  8 04:47:45 np0005550137 systemd[1]: session-27.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: session-28.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 31.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 28.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 23.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 27.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 26.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 30.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 29.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 25.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 21.
Dec  8 04:47:45 np0005550137 ceph-mgr[74806]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:47:45 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'cephadm'
Dec  8 04:47:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:45.091+0000 7fa18da21140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:47:45 np0005550137 podman[89672]: 2025-12-08 09:47:45.272243084 +0000 UTC m=+0.048083214 container create d9ae01bc5eec17dbede4c7c64987c5a44ee785691a5ccd14949b6f8ce03c7c2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-rgw-rgw-compute-0-slkrtm, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  8 04:47:45 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060b9f05a2f3eccdc3ca0a2c062ef2fc789c23d0f27273d8276c5e3949ae58d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:45 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060b9f05a2f3eccdc3ca0a2c062ef2fc789c23d0f27273d8276c5e3949ae58d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:45 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060b9f05a2f3eccdc3ca0a2c062ef2fc789c23d0f27273d8276c5e3949ae58d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:45 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/060b9f05a2f3eccdc3ca0a2c062ef2fc789c23d0f27273d8276c5e3949ae58d1/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.slkrtm supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:45 np0005550137 podman[89672]: 2025-12-08 09:47:45.330288604 +0000 UTC m=+0.106128744 container init d9ae01bc5eec17dbede4c7c64987c5a44ee785691a5ccd14949b6f8ce03c7c2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-rgw-rgw-compute-0-slkrtm, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:45 np0005550137 podman[89672]: 2025-12-08 09:47:45.341438417 +0000 UTC m=+0.117278537 container start d9ae01bc5eec17dbede4c7c64987c5a44ee785691a5ccd14949b6f8ce03c7c2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-rgw-rgw-compute-0-slkrtm, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  8 04:47:45 np0005550137 bash[89672]: d9ae01bc5eec17dbede4c7c64987c5a44ee785691a5ccd14949b6f8ce03c7c2e
Dec  8 04:47:45 np0005550137 podman[89672]: 2025-12-08 09:47:45.251560044 +0000 UTC m=+0.027400194 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:47:45 np0005550137 systemd[1]: Started Ceph rgw.rgw.compute-0.slkrtm for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:47:45 np0005550137 radosgw[89717]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec  8 04:47:45 np0005550137 radosgw[89717]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Dec  8 04:47:45 np0005550137 radosgw[89717]: framework: beast
Dec  8 04:47:45 np0005550137 radosgw[89717]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec  8 04:47:45 np0005550137 radosgw[89717]: init_numa not setting numa affinity
Dec  8 04:47:45 np0005550137 systemd[1]: session-33.scope: Deactivated successfully.
Dec  8 04:47:45 np0005550137 systemd[1]: session-33.scope: Consumed 27.402s CPU time.
Dec  8 04:47:45 np0005550137 systemd-logind[805]: Removed session 33.
Dec  8 04:47:45 np0005550137 python3[89716]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:45 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec  8 04:47:45 np0005550137 podman[90309]: 2025-12-08 09:47:45.544076814 +0000 UTC m=+0.044159959 container create 5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11 (image=quay.io/ceph/ceph:v19, name=youthful_bose, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:45 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec  8 04:47:45 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec  8 04:47:45 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  8 04:47:45 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  8 04:47:45 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec  8 04:47:45 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec  8 04:47:45 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 37 pg[9.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:45 np0005550137 systemd[1]: Started libpod-conmon-5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11.scope.
Dec  8 04:47:45 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:45 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff15ce9d075b4dba4533368ba206cbb610d73a3c47962d1ad953b6fbc416593f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:45 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff15ce9d075b4dba4533368ba206cbb610d73a3c47962d1ad953b6fbc416593f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:45 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff15ce9d075b4dba4533368ba206cbb610d73a3c47962d1ad953b6fbc416593f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:45 np0005550137 podman[90309]: 2025-12-08 09:47:45.523099466 +0000 UTC m=+0.023182641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:45 np0005550137 podman[90309]: 2025-12-08 09:47:45.634530063 +0000 UTC m=+0.134613258 container init 5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11 (image=quay.io/ceph/ceph:v19, name=youthful_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  8 04:47:45 np0005550137 podman[90309]: 2025-12-08 09:47:45.641215806 +0000 UTC m=+0.141298951 container start 5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11 (image=quay.io/ceph/ceph:v19, name=youthful_bose, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  8 04:47:45 np0005550137 podman[90309]: 2025-12-08 09:47:45.644430619 +0000 UTC m=+0.144513764 container attach 5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11 (image=quay.io/ceph/ceph:v19, name=youthful_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:45 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/1032861131' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  8 04:47:45 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  8 04:47:45 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec  8 04:47:45 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'crash'
Dec  8 04:47:45 np0005550137 ceph-mgr[74806]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:47:45 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'dashboard'
Dec  8 04:47:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:45.911+0000 7fa18da21140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'devicehealth'
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:47:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:46.543+0000 7fa18da21140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'diskprediction_local'
Dec  8 04:47:46 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec  8 04:47:46 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  8 04:47:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  8 04:47:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  8 04:47:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  from numpy import show_config as show_numpy_config
Dec  8 04:47:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:46.717+0000 7fa18da21140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'influx'
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.102:0/2102705496' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  8 04:47:46 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'insights'
Dec  8 04:47:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:46.787+0000 7fa18da21140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'iostat'
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:47:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'k8sevents'
Dec  8 04:47:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:46.922+0000 7fa18da21140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:47:47 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'localpool'
Dec  8 04:47:47 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mds_autoscaler'
Dec  8 04:47:47 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec  8 04:47:47 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec  8 04:47:47 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mirroring'
Dec  8 04:47:47 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'nfs'
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  8 04:47:47 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec  8 04:47:47 np0005550137 ceph-mgr[74806]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:47:47 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'orchestrator'
Dec  8 04:47:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:47.928+0000 7fa18da21140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_perf_query'
Dec  8 04:47:48 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:48.160+0000 7fa18da21140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:48.241+0000 7fa18da21140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_support'
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:48.317+0000 7fa18da21140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'pg_autoscaler'
Dec  8 04:47:48 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:48.399+0000 7fa18da21140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'progress'
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:48.481+0000 7fa18da21140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'prometheus'
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec  8 04:47:48 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec  8 04:47:48 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  8 04:47:48 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.102:0/2102705496' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  8 04:47:48 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rbd_support'
Dec  8 04:47:48 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:48.889+0000 7fa18da21140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:47:48 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'restful'
Dec  8 04:47:48 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:48.990+0000 7fa18da21140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:47:49 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rgw'
Dec  8 04:47:49 np0005550137 ceph-mgr[74806]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:47:49 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rook'
Dec  8 04:47:49 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:49.432+0000 7fa18da21140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec  8 04:47:49 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.7 deep-scrub starts
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  8 04:47:49 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.7 deep-scrub ok
Dec  8 04:47:49 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 41 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.102:0/2102705496' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.101:0/3268586272' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Dec  8 04:47:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:49 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:49.987+0000 7fa18da21140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:47:49 np0005550137 ceph-mgr[74806]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:47:49 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'selftest'
Dec  8 04:47:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:50.059+0000 7fa18da21140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'snap_schedule'
Dec  8 04:47:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:50.140+0000 7fa18da21140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'stats'
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'status'
Dec  8 04:47:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:50.303+0000 7fa18da21140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telegraf'
Dec  8 04:47:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:50.376+0000 7fa18da21140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telemetry'
Dec  8 04:47:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:50.532+0000 7fa18da21140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'test_orchestrator'
Dec  8 04:47:50 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec  8 04:47:50 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3979683973' entity='client.rgw.rgw.compute-0.slkrtm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-2.dimexm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: from='client.? ' entity='client.rgw.rgw.compute-1.rblbpq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec  8 04:47:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:50.766+0000 7fa18da21140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:47:50 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'volumes'
Dec  8 04:47:50 np0005550137 radosgw[89717]: v1 topic migration: starting v1 topic migration..
Dec  8 04:47:50 np0005550137 radosgw[89717]: LDAP not started since no server URIs were provided in the configuration.
Dec  8 04:47:50 np0005550137 radosgw[89717]: v1 topic migration: finished v1 topic migration
Dec  8 04:47:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-rgw-rgw-compute-0-slkrtm[89708]: 2025-12-08T09:47:50.820+0000 7ff81e070980 -1 LDAP not started since no server URIs were provided in the configuration.
Dec  8 04:47:50 np0005550137 radosgw[89717]: framework: beast
Dec  8 04:47:50 np0005550137 radosgw[89717]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec  8 04:47:50 np0005550137 radosgw[89717]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec  8 04:47:50 np0005550137 radosgw[89717]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Dec  8 04:47:50 np0005550137 radosgw[89717]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Dec  8 04:47:50 np0005550137 radosgw[89717]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Dec  8 04:47:50 np0005550137 radosgw[89717]: starting handler: beast
Dec  8 04:47:50 np0005550137 radosgw[89717]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Dec  8 04:47:50 np0005550137 radosgw[89717]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Dec  8 04:47:50 np0005550137 radosgw[89717]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Dec  8 04:47:50 np0005550137 radosgw[89717]: set uid:gid to 167:167 (ceph:ceph)
Dec  8 04:47:50 np0005550137 radosgw[89717]: mgrc service_daemon_register rgw.14391 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.slkrtm,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025,kernel_version=5.14.0-645.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864320,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=f2fa6c7a-b392-4a6f-84e7-a8a07770c620,zone_name=default,zonegroup_id=68492763-3f06-49eb-87b1-edc419fff75a,zonegroup_name=default}
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv restarted
Dec  8 04:47:50 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv started
Dec  8 04:47:50 np0005550137 radosgw[89717]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Dec  8 04:47:50 np0005550137 radosgw[89717]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif restarted
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif started
Dec  8 04:47:51 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:51.042+0000 7fa18da21140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'zabbix'
Dec  8 04:47:51 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:51.135+0000 7fa18da21140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.kitiwu restarted
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kitiwu
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: ms_deliver_dispatch: unhandled message 0x55d81edb3860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map Activating!
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map I am now activating
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.kitiwu(active, starting, since 0.0342847s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e1 all = 1
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.kitiwu is now available
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: balancer
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [balancer INFO root] Starting
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [balancer INFO root] Optimize plan auto_2025-12-08_09:47:51
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: cephadm
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: crash
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: dashboard
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO sso] Loading SSO DB version=1
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: devicehealth
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: iostat
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Starting
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: nfs
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: orchestrator
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: pg_autoscaler
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: progress
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] _maybe_adjust
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [progress INFO root] Loading...
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fa108535c40>, <progress.module.GhostEvent object at 0x7fa108535c70>, <progress.module.GhostEvent object at 0x7fa108535ca0>, <progress.module.GhostEvent object at 0x7fa108535cd0>, <progress.module.GhostEvent object at 0x7fa108535d00>, <progress.module.GhostEvent object at 0x7fa108535d30>, <progress.module.GhostEvent object at 0x7fa108535d60>, <progress.module.GhostEvent object at 0x7fa108535d90>, <progress.module.GhostEvent object at 0x7fa108535dc0>, <progress.module.GhostEvent object at 0x7fa108535df0>, <progress.module.GhostEvent object at 0x7fa108535e20>, <progress.module.GhostEvent object at 0x7fa108535e50>] historic events
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded OSDMap, ready.
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] recovery thread starting
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] starting setup
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: rbd_support
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: restful
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: status
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [restful INFO root] server_addr: :: server_port: 8003
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: telemetry
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [restful WARNING root] server not running: no certificate configured
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] PerfHandler: starting
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TaskHandler: starting
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"} v 0)
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: volumes
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] setup complete
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  8 04:47:51 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec  8 04:47:51 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec  8 04:47:51 np0005550137 systemd-logind[805]: New session 34 of user ceph-admin.
Dec  8 04:47:51 np0005550137 systemd[1]: Started Session 34 of User ceph-admin.
Dec  8 04:47:51 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.module] Engine started.
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: Active manager daemon compute-0.kitiwu restarted
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: Activating manager daemon compute-0.kitiwu
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: Manager daemon compute-0.kitiwu is now available
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:47:51 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14397 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:47:52 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.kitiwu(active, since 1.06674s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:47:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:52 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:52 np0005550137 youthful_bose[90332]: Option GRAFANA_API_USERNAME updated
Dec  8 04:47:52 np0005550137 systemd[1]: libpod-5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11.scope: Deactivated successfully.
Dec  8 04:47:52 np0005550137 podman[90309]: 2025-12-08 09:47:52.250192403 +0000 UTC m=+6.750275548 container died 5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11 (image=quay.io/ceph/ceph:v19, name=youthful_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:47:52 np0005550137 systemd[1]: var-lib-containers-storage-overlay-ff15ce9d075b4dba4533368ba206cbb610d73a3c47962d1ad953b6fbc416593f-merged.mount: Deactivated successfully.
Dec  8 04:47:52 np0005550137 podman[90309]: 2025-12-08 09:47:52.289077389 +0000 UTC m=+6.789160534 container remove 5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11 (image=quay.io/ceph/ceph:v19, name=youthful_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:47:52] ENGINE Bus STARTING
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:47:52] ENGINE Bus STARTING
Dec  8 04:47:52 np0005550137 systemd[1]: libpod-conmon-5f2eb39fb0a9703afb6cc015e65299cd7ebe229387f960523d5339d61ad96a11.scope: Deactivated successfully.
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:47:52] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:47:52] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:47:52 np0005550137 podman[90677]: 2025-12-08 09:47:52.439280827 +0000 UTC m=+0.060797721 container exec e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:47:52] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:47:52] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:47:52] ENGINE Bus STARTED
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:47:52] ENGINE Bus STARTED
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:47:52] ENGINE Client ('192.168.122.100', 50968) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:47:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:47:52] ENGINE Client ('192.168.122.100', 50968) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:47:52 np0005550137 podman[90677]: 2025-12-08 09:47:52.56198868 +0000 UTC m=+0.183505564 container exec_died e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec  8 04:47:52 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec  8 04:47:52 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec  8 04:47:52 np0005550137 python3[90733]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Dec  8 04:47:52 np0005550137 podman[90749]: 2025-12-08 09:47:52.684747544 +0000 UTC m=+0.069170744 container create 86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2 (image=quay.io/ceph/ceph:v19, name=stoic_edison, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  8 04:47:52 np0005550137 systemd[1]: Started libpod-conmon-86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2.scope.
Dec  8 04:47:52 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:52 np0005550137 podman[90749]: 2025-12-08 09:47:52.655496128 +0000 UTC m=+0.039919388 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:52 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e8591956328cc6e99f54a66f79f5fb1105768b2330a1cea552ff84b8b61680/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:52 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e8591956328cc6e99f54a66f79f5fb1105768b2330a1cea552ff84b8b61680/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:52 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66e8591956328cc6e99f54a66f79f5fb1105768b2330a1cea552ff84b8b61680/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:52 np0005550137 podman[90749]: 2025-12-08 09:47:52.762547227 +0000 UTC m=+0.146970507 container init 86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2 (image=quay.io/ceph/ceph:v19, name=stoic_edison, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  8 04:47:52 np0005550137 podman[90749]: 2025-12-08 09:47:52.769318543 +0000 UTC m=+0.153741723 container start 86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2 (image=quay.io/ceph/ceph:v19, name=stoic_edison, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  8 04:47:52 np0005550137 podman[90749]: 2025-12-08 09:47:52.773096252 +0000 UTC m=+0.157519532 container attach 86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2 (image=quay.io/ceph/ceph:v19, name=stoic_edison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  8 04:47:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:47:52 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 stoic_edison[90791]: Option GRAFANA_API_PASSWORD updated
Dec  8 04:47:53 np0005550137 systemd[1]: libpod-86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2.scope: Deactivated successfully.
Dec  8 04:47:53 np0005550137 podman[90749]: 2025-12-08 09:47:53.161392124 +0000 UTC m=+0.545815294 container died 86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2 (image=quay.io/ceph/ceph:v19, name=stoic_edison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:53 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:53 np0005550137 systemd[1]: var-lib-containers-storage-overlay-66e8591956328cc6e99f54a66f79f5fb1105768b2330a1cea552ff84b8b61680-merged.mount: Deactivated successfully.
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 podman[90749]: 2025-12-08 09:47:53.203226625 +0000 UTC m=+0.587649815 container remove 86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2 (image=quay.io/ceph/ceph:v19, name=stoic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:47:52] ENGINE Bus STARTING
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:47:52] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:47:52] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:47:52] ENGINE Bus STARTED
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:47:52] ENGINE Client ('192.168.122.100', 50968) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 systemd[1]: libpod-conmon-86ec0c41a25f2a0dd8f15b558a77f4b6a04e46b29472e36a0a66db23c6d8d4e2.scope: Deactivated successfully.
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:47:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:53 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Check health
Dec  8 04:47:53 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Dec  8 04:47:53 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Dec  8 04:47:53 np0005550137 python3[90970]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:53 np0005550137 podman[90974]: 2025-12-08 09:47:53.676425906 +0000 UTC m=+0.069104362 container create f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd (image=quay.io/ceph/ceph:v19, name=frosty_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec  8 04:47:53 np0005550137 systemd[1]: Started libpod-conmon-f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd.scope.
Dec  8 04:47:53 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:53 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b375356cb381fd2690c9f0834c3c91d040fa44ba064761d3fa95e2b1716913b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:53 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b375356cb381fd2690c9f0834c3c91d040fa44ba064761d3fa95e2b1716913b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:53 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b375356cb381fd2690c9f0834c3c91d040fa44ba064761d3fa95e2b1716913b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:53 np0005550137 podman[90974]: 2025-12-08 09:47:53.748838432 +0000 UTC m=+0.141516948 container init f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd (image=quay.io/ceph/ceph:v19, name=frosty_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  8 04:47:53 np0005550137 podman[90974]: 2025-12-08 09:47:53.654552692 +0000 UTC m=+0.047231218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:53 np0005550137 podman[90974]: 2025-12-08 09:47:53.756769461 +0000 UTC m=+0.149447927 container start f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd (image=quay.io/ceph/ceph:v19, name=frosty_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:53 np0005550137 podman[90974]: 2025-12-08 09:47:53.760325825 +0000 UTC m=+0.153004301 container attach f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd (image=quay.io/ceph/ceph:v19, name=frosty_einstein, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  8 04:47:54 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 frosty_einstein[90997]: Option ALERTMANAGER_API_HOST updated
Dec  8 04:47:54 np0005550137 systemd[1]: libpod-f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd.scope: Deactivated successfully.
Dec  8 04:47:54 np0005550137 podman[90974]: 2025-12-08 09:47:54.165302169 +0000 UTC m=+0.557980635 container died f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd (image=quay.io/ceph/ceph:v19, name=frosty_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.kitiwu(active, since 3s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:47:54 np0005550137 systemd[1]: var-lib-containers-storage-overlay-4b375356cb381fd2690c9f0834c3c91d040fa44ba064761d3fa95e2b1716913b-merged.mount: Deactivated successfully.
Dec  8 04:47:54 np0005550137 podman[90974]: 2025-12-08 09:47:54.203177757 +0000 UTC m=+0.595856223 container remove f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd (image=quay.io/ceph/ceph:v19, name=frosty_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  8 04:47:54 np0005550137 systemd[1]: libpod-conmon-f888c68760816e2f5c9ad1ecd834d4c734bb4469f222f1510f94781232e9ddcd.scope: Deactivated successfully.
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:47:54 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:47:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:47:54 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:47:54 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:47:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:47:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:47:54 np0005550137 python3[91135]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:54 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec  8 04:47:54 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec  8 04:47:54 np0005550137 podman[91159]: 2025-12-08 09:47:54.566686411 +0000 UTC m=+0.051830461 container create ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:54 np0005550137 systemd[1]: Started libpod-conmon-ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd.scope.
Dec  8 04:47:54 np0005550137 podman[91159]: 2025-12-08 09:47:54.543831799 +0000 UTC m=+0.028975909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:54 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:54 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6f75fb7b5efac46a6ba79163ff773fee2b87fd94d53bc7ab2abf56dcfbfb93/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:54 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6f75fb7b5efac46a6ba79163ff773fee2b87fd94d53bc7ab2abf56dcfbfb93/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:54 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb6f75fb7b5efac46a6ba79163ff773fee2b87fd94d53bc7ab2abf56dcfbfb93/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:54 np0005550137 podman[91159]: 2025-12-08 09:47:54.666695386 +0000 UTC m=+0.151839456 container init ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:54 np0005550137 podman[91159]: 2025-12-08 09:47:54.674428411 +0000 UTC m=+0.159572471 container start ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec  8 04:47:54 np0005550137 podman[91159]: 2025-12-08 09:47:54.679288241 +0000 UTC m=+0.164432311 container attach ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.24187 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 elastic_heyrovsky[91200]: Option PROMETHEUS_API_HOST updated
Dec  8 04:47:55 np0005550137 systemd[1]: libpod-ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd.scope: Deactivated successfully.
Dec  8 04:47:55 np0005550137 podman[91159]: 2025-12-08 09:47:55.090291151 +0000 UTC m=+0.575435201 container died ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:55 np0005550137 systemd[1]: var-lib-containers-storage-overlay-bb6f75fb7b5efac46a6ba79163ff773fee2b87fd94d53bc7ab2abf56dcfbfb93-merged.mount: Deactivated successfully.
Dec  8 04:47:55 np0005550137 podman[91159]: 2025-12-08 09:47:55.123163432 +0000 UTC m=+0.608307462 container remove ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd (image=quay.io/ceph/ceph:v19, name=elastic_heyrovsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:55 np0005550137 systemd[1]: libpod-conmon-ef681da726402f5be1ce1c9f3094c18f6a8c339721e0e09af0e8e4531a2c1edd.scope: Deactivated successfully.
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 python3[91510]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:55 np0005550137 podman[91560]: 2025-12-08 09:47:55.508819289 +0000 UTC m=+0.061311577 container create 331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f (image=quay.io/ceph/ceph:v19, name=gracious_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  8 04:47:55 np0005550137 systemd[1]: Started libpod-conmon-331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f.scope.
Dec  8 04:47:55 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:55 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1eb27b4e3ba638a8622a840d4b2e3d319955dfde030f04c345f71298eb25f44/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:55 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1eb27b4e3ba638a8622a840d4b2e3d319955dfde030f04c345f71298eb25f44/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:55 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1eb27b4e3ba638a8622a840d4b2e3d319955dfde030f04c345f71298eb25f44/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:55 np0005550137 podman[91560]: 2025-12-08 09:47:55.572898644 +0000 UTC m=+0.125390962 container init 331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f (image=quay.io/ceph/ceph:v19, name=gracious_hawking, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:47:55 np0005550137 podman[91560]: 2025-12-08 09:47:55.577935 +0000 UTC m=+0.130427258 container start 331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f (image=quay.io/ceph/ceph:v19, name=gracious_hawking, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  8 04:47:55 np0005550137 podman[91560]: 2025-12-08 09:47:55.581831322 +0000 UTC m=+0.134323610 container attach 331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f (image=quay.io/ceph/ceph:v19, name=gracious_hawking, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:55 np0005550137 podman[91560]: 2025-12-08 09:47:55.489341605 +0000 UTC m=+0.041833883 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:55 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec  8 04:47:55 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:55 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14445 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  8 04:47:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:55 np0005550137 gracious_hawking[91616]: Option GRAFANA_API_URL updated
Dec  8 04:47:56 np0005550137 systemd[1]: libpod-331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f.scope: Deactivated successfully.
Dec  8 04:47:56 np0005550137 podman[91560]: 2025-12-08 09:47:56.006137187 +0000 UTC m=+0.558629455 container died 331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f (image=quay.io/ceph/ceph:v19, name=gracious_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:47:56 np0005550137 systemd[1]: var-lib-containers-storage-overlay-a1eb27b4e3ba638a8622a840d4b2e3d319955dfde030f04c345f71298eb25f44-merged.mount: Deactivated successfully.
Dec  8 04:47:56 np0005550137 podman[91560]: 2025-12-08 09:47:56.044115966 +0000 UTC m=+0.596608234 container remove 331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f (image=quay.io/ceph/ceph:v19, name=gracious_hawking, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  8 04:47:56 np0005550137 systemd[1]: libpod-conmon-331ecfee3c43ea3f6024df24710d40b6c403d4586b2639fa04ff141c5f8b041f.scope: Deactivated successfully.
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.kitiwu(active, since 4s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:47:56 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:56 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 python3[91955]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:56 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:56 np0005550137 podman[92006]: 2025-12-08 09:47:56.415745457 +0000 UTC m=+0.046732865 container create 2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764 (image=quay.io/ceph/ceph:v19, name=sharp_moser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:47:56 np0005550137 systemd[1]: Started libpod-conmon-2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764.scope.
Dec  8 04:47:56 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:56 np0005550137 podman[92006]: 2025-12-08 09:47:56.397554809 +0000 UTC m=+0.028542227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:56 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb7ebfc0b8815fc27768e29a99f93e74ff9f83bc1d5c63a8a3094786959849c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:56 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb7ebfc0b8815fc27768e29a99f93e74ff9f83bc1d5c63a8a3094786959849c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:56 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb7ebfc0b8815fc27768e29a99f93e74ff9f83bc1d5c63a8a3094786959849c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:56 np0005550137 podman[92006]: 2025-12-08 09:47:56.540794557 +0000 UTC m=+0.171781985 container init 2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764 (image=quay.io/ceph/ceph:v19, name=sharp_moser, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:47:56 np0005550137 podman[92006]: 2025-12-08 09:47:56.549502699 +0000 UTC m=+0.180490107 container start 2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764 (image=quay.io/ceph/ceph:v19, name=sharp_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:47:56 np0005550137 podman[92006]: 2025-12-08 09:47:56.552833076 +0000 UTC m=+0.183820484 container attach 2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764 (image=quay.io/ceph/ceph:v19, name=sharp_moser, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  8 04:47:56 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec  8 04:47:56 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4114852922' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  8 04:47:56 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 5ea95278-962d-43f2-a06d-702d93ea2d36 (Updating node-exporter deployment (+3 -> 3))
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/4114852922' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: from='mgr.14364 192.168.122.100:0/3260131459' entity='mgr.compute-0.kitiwu' 
Dec  8 04:47:57 np0005550137 systemd[1]: Reloading.
Dec  8 04:47:57 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec  8 04:47:57 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec  8 04:47:57 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:47:57 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:47:57 np0005550137 systemd[1]: Reloading.
Dec  8 04:47:57 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:47:57 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4114852922' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  8 04:47:57 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.kitiwu(active, since 6s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  1: '-n'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  2: 'mgr.compute-0.kitiwu'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  3: '-f'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  4: '--setuser'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  5: 'ceph'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  6: '--setgroup'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  7: 'ceph'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  8: '--default-log-to-file=false'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  9: '--default-log-to-journald=true'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  10: '--default-log-to-stderr=false'
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec  8 04:47:57 np0005550137 ceph-mgr[74806]: mgr respawn  exe_path /proc/self/exe
Dec  8 04:47:57 np0005550137 podman[92006]: 2025-12-08 09:47:57.985460493 +0000 UTC m=+1.616447911 container died 2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764 (image=quay.io/ceph/ceph:v19, name=sharp_moser, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  8 04:47:58 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setuser ceph since I am not root
Dec  8 04:47:58 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setgroup ceph since I am not root
Dec  8 04:47:58 np0005550137 ceph-mgr[74806]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  8 04:47:58 np0005550137 ceph-mgr[74806]: pidfile_write: ignore empty --pid-file
Dec  8 04:47:58 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'alerts'
Dec  8 04:47:58 np0005550137 systemd[1]: libpod-2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764.scope: Deactivated successfully.
Dec  8 04:47:58 np0005550137 systemd[1]: Starting Ceph node-exporter.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:47:58 np0005550137 systemd[1]: var-lib-containers-storage-overlay-cbb7ebfc0b8815fc27768e29a99f93e74ff9f83bc1d5c63a8a3094786959849c-merged.mount: Deactivated successfully.
Dec  8 04:47:58 np0005550137 systemd-logind[805]: Session 34 logged out. Waiting for processes to exit.
Dec  8 04:47:58 np0005550137 podman[92006]: 2025-12-08 09:47:58.163809127 +0000 UTC m=+1.794796585 container remove 2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764 (image=quay.io/ceph/ceph:v19, name=sharp_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  8 04:47:58 np0005550137 systemd[1]: libpod-conmon-2ac9e842d8fb74bff0534af6d05bfe5b7db89c854f443c04e6f7189e4fe70764.scope: Deactivated successfully.
Dec  8 04:47:58 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:58.222+0000 7ff01ac77140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:47:58 np0005550137 ceph-mgr[74806]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:47:58 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'balancer'
Dec  8 04:47:58 np0005550137 ceph-mon[74516]: Deploying daemon node-exporter.compute-0 on compute-0
Dec  8 04:47:58 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/4114852922' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Dec  8 04:47:58 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:58.311+0000 7ff01ac77140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:47:58 np0005550137 ceph-mgr[74806]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:47:58 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'cephadm'
Dec  8 04:47:58 np0005550137 bash[92490]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Dec  8 04:47:58 np0005550137 python3[92489]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:47:58 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec  8 04:47:58 np0005550137 podman[92502]: 2025-12-08 09:47:58.54109459 +0000 UTC m=+0.043468099 container create edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea (image=quay.io/ceph/ceph:v19, name=thirsty_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:58 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec  8 04:47:58 np0005550137 systemd[1]: Started libpod-conmon-edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea.scope.
Dec  8 04:47:58 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:47:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd9948a04404965ff4a8e2429b3b532768cfcf7713e4c4c3908481046a26147/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:58 np0005550137 podman[92502]: 2025-12-08 09:47:58.52452214 +0000 UTC m=+0.026895669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:47:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd9948a04404965ff4a8e2429b3b532768cfcf7713e4c4c3908481046a26147/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bd9948a04404965ff4a8e2429b3b532768cfcf7713e4c4c3908481046a26147/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:58 np0005550137 podman[92502]: 2025-12-08 09:47:58.636721898 +0000 UTC m=+0.139095417 container init edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea (image=quay.io/ceph/ceph:v19, name=thirsty_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  8 04:47:58 np0005550137 podman[92502]: 2025-12-08 09:47:58.648530921 +0000 UTC m=+0.150904440 container start edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea (image=quay.io/ceph/ceph:v19, name=thirsty_varahamihira, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:47:58 np0005550137 podman[92502]: 2025-12-08 09:47:58.652616279 +0000 UTC m=+0.154989778 container attach edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea (image=quay.io/ceph/ceph:v19, name=thirsty_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Dec  8 04:47:58 np0005550137 bash[92490]: Getting image source signatures
Dec  8 04:47:58 np0005550137 bash[92490]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Dec  8 04:47:58 np0005550137 bash[92490]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Dec  8 04:47:58 np0005550137 bash[92490]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'crash'
Dec  8 04:47:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Dec  8 04:47:59 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3310196236' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:59.096+0000 7ff01ac77140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'dashboard'
Dec  8 04:47:59 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3310196236' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  8 04:47:59 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.kitiwu(active, since 8s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:47:59 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3310196236' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Dec  8 04:47:59 np0005550137 systemd[1]: libpod-edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea.scope: Deactivated successfully.
Dec  8 04:47:59 np0005550137 bash[92490]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Dec  8 04:47:59 np0005550137 bash[92490]: Writing manifest to image destination
Dec  8 04:47:59 np0005550137 podman[92615]: 2025-12-08 09:47:59.414053225 +0000 UTC m=+0.045973913 container died edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea (image=quay.io/ceph/ceph:v19, name=thirsty_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec  8 04:47:59 np0005550137 systemd[1]: var-lib-containers-storage-overlay-9bd9948a04404965ff4a8e2429b3b532768cfcf7713e4c4c3908481046a26147-merged.mount: Deactivated successfully.
Dec  8 04:47:59 np0005550137 podman[92615]: 2025-12-08 09:47:59.448770189 +0000 UTC m=+0.080690887 container remove edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea (image=quay.io/ceph/ceph:v19, name=thirsty_varahamihira, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:47:59 np0005550137 systemd[1]: libpod-conmon-edcbb71de0df9df01f440f72072d1fa51a0a46c5b2e13591400ec1811f36fdea.scope: Deactivated successfully.
Dec  8 04:47:59 np0005550137 podman[92490]: 2025-12-08 09:47:59.454000971 +0000 UTC m=+1.098916917 container create 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:47:59 np0005550137 podman[92490]: 2025-12-08 09:47:59.435967079 +0000 UTC m=+1.080883045 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec  8 04:47:59 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7401ea864925811372830366978da613707859a0568bf1f0ffe1780de2241ec/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec  8 04:47:59 np0005550137 podman[92490]: 2025-12-08 09:47:59.517016016 +0000 UTC m=+1.161931972 container init 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:47:59 np0005550137 podman[92490]: 2025-12-08 09:47:59.521949738 +0000 UTC m=+1.166865684 container start 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:47:59 np0005550137 bash[92490]: 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.533Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.533Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec  8 04:47:59 np0005550137 systemd[1]: Started Ceph node-exporter.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:47:59 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.535Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.535Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.536Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=arp
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=bcache
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=bonding
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=cpu
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=dmi
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=edac
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=entropy
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=filefd
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=hwmon
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=netclass
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=netdev
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=netstat
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=nfs
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=nvme
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=os
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=pressure
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=rapl
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=selinux
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=softnet
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=stat
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=textfile
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=time
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=uname
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=xfs
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.537Z caller=node_exporter.go:117 level=info collector=zfs
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.538Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[92630]: ts=2025-12-08T09:47:59.538Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec  8 04:47:59 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec  8 04:47:59 np0005550137 systemd[1]: session-34.scope: Deactivated successfully.
Dec  8 04:47:59 np0005550137 systemd[1]: session-34.scope: Consumed 5.561s CPU time.
Dec  8 04:47:59 np0005550137 systemd-logind[805]: Removed session 34.
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'devicehealth'
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:59.703+0000 7ff01ac77140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'diskprediction_local'
Dec  8 04:47:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  from numpy import show_config as show_numpy_config
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:59.857+0000 7ff01ac77140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'influx'
Dec  8 04:47:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:47:59.924+0000 7ff01ac77140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'insights'
Dec  8 04:47:59 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'iostat'
Dec  8 04:48:00 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:00.057+0000 7ff01ac77140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:48:00 np0005550137 ceph-mgr[74806]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:48:00 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'k8sevents'
Dec  8 04:48:00 np0005550137 python3[92714]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:48:00 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3310196236' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Dec  8 04:48:00 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'localpool'
Dec  8 04:48:00 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mds_autoscaler'
Dec  8 04:48:00 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Dec  8 04:48:00 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Dec  8 04:48:00 np0005550137 python3[92785]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187279.9882212-37312-236304118111260/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:48:00 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mirroring'
Dec  8 04:48:00 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'nfs'
Dec  8 04:48:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:01.088+0000 7ff01ac77140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'orchestrator'
Dec  8 04:48:01 np0005550137 python3[92835]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:01 np0005550137 podman[92836]: 2025-12-08 09:48:01.196390508 +0000 UTC m=+0.058043792 container create cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd (image=quay.io/ceph/ceph:v19, name=compassionate_merkle, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec  8 04:48:01 np0005550137 systemd[1]: Started libpod-conmon-cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd.scope.
Dec  8 04:48:01 np0005550137 podman[92836]: 2025-12-08 09:48:01.171214109 +0000 UTC m=+0.032867393 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:01 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7981e43bca5f77376085be4da150306aefef36488722d2dfe017b17b55ec39fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7981e43bca5f77376085be4da150306aefef36488722d2dfe017b17b55ec39fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:01 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7981e43bca5f77376085be4da150306aefef36488722d2dfe017b17b55ec39fc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:01.296+0000 7ff01ac77140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_perf_query'
Dec  8 04:48:01 np0005550137 podman[92836]: 2025-12-08 09:48:01.304907949 +0000 UTC m=+0.166561273 container init cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd (image=quay.io/ceph/ceph:v19, name=compassionate_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  8 04:48:01 np0005550137 podman[92836]: 2025-12-08 09:48:01.31492564 +0000 UTC m=+0.176578894 container start cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd (image=quay.io/ceph/ceph:v19, name=compassionate_merkle, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  8 04:48:01 np0005550137 podman[92836]: 2025-12-08 09:48:01.318698318 +0000 UTC m=+0.180351582 container attach cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd (image=quay.io/ceph/ceph:v19, name=compassionate_merkle, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec  8 04:48:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:01.374+0000 7ff01ac77140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_support'
Dec  8 04:48:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:01.444+0000 7ff01ac77140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'pg_autoscaler'
Dec  8 04:48:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:01.532+0000 7ff01ac77140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'progress'
Dec  8 04:48:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:01.606+0000 7ff01ac77140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'prometheus'
Dec  8 04:48:01 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Dec  8 04:48:01 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Dec  8 04:48:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:01.979+0000 7ff01ac77140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:48:01 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rbd_support'
Dec  8 04:48:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:02.084+0000 7ff01ac77140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:48:02 np0005550137 ceph-mgr[74806]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:48:02 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'restful'
Dec  8 04:48:02 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rgw'
Dec  8 04:48:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:02.549+0000 7ff01ac77140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:48:02 np0005550137 ceph-mgr[74806]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:48:02 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rook'
Dec  8 04:48:02 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec  8 04:48:02 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec  8 04:48:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:03.131+0000 7ff01ac77140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'selftest'
Dec  8 04:48:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:03.203+0000 7ff01ac77140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'snap_schedule'
Dec  8 04:48:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:03.283+0000 7ff01ac77140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'stats'
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'status'
Dec  8 04:48:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:03.437+0000 7ff01ac77140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telegraf'
Dec  8 04:48:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:03.513+0000 7ff01ac77140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telemetry'
Dec  8 04:48:03 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec  8 04:48:03 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec  8 04:48:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:03.677+0000 7ff01ac77140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'test_orchestrator'
Dec  8 04:48:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:03.901+0000 7ff01ac77140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:48:03 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'volumes'
Dec  8 04:48:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:04.176+0000 7ff01ac77140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'zabbix'
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif restarted
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif started
Dec  8 04:48:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:04.249+0000 7ff01ac77140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.kitiwu restarted
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kitiwu
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: ms_deliver_dispatch: unhandled message 0x56358a29b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.kitiwu(active, starting, since 0.0321847s), standbys: compute-2.zqytsv, compute-1.mmkaif
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv restarted
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv started
Dec  8 04:48:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setuser ceph since I am not root
Dec  8 04:48:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setgroup ceph since I am not root
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: pidfile_write: ignore empty --pid-file
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'alerts'
Dec  8 04:48:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:04.516+0000 7f46c01bb140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'balancer'
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: Active manager daemon compute-0.kitiwu restarted
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: Activating manager daemon compute-0.kitiwu
Dec  8 04:48:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:04.593+0000 7f46c01bb140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:48:04 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'cephadm'
Dec  8 04:48:04 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Dec  8 04:48:04 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Dec  8 04:48:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'crash'
Dec  8 04:48:05 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.kitiwu(active, starting, since 1.12235s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:48:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:05.400+0000 7f46c01bb140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:48:05 np0005550137 ceph-mgr[74806]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:48:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'dashboard'
Dec  8 04:48:05 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Dec  8 04:48:05 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Dec  8 04:48:05 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'devicehealth'
Dec  8 04:48:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:06.027+0000 7f46c01bb140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'diskprediction_local'
Dec  8 04:48:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  8 04:48:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  8 04:48:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  from numpy import show_config as show_numpy_config
Dec  8 04:48:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:06.200+0000 7f46c01bb140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'influx'
Dec  8 04:48:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:06.271+0000 7f46c01bb140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'insights'
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'iostat'
Dec  8 04:48:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:06.407+0000 7f46c01bb140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'k8sevents'
Dec  8 04:48:06 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.10 scrub starts
Dec  8 04:48:06 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.10 scrub ok
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'localpool'
Dec  8 04:48:06 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mds_autoscaler'
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mirroring'
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'nfs'
Dec  8 04:48:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:07.393+0000 7f46c01bb140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'orchestrator'
Dec  8 04:48:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:07.615+0000 7f46c01bb140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_perf_query'
Dec  8 04:48:07 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.11 deep-scrub starts
Dec  8 04:48:07 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.11 deep-scrub ok
Dec  8 04:48:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:07.688+0000 7f46c01bb140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_support'
Dec  8 04:48:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:07.758+0000 7f46c01bb140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'pg_autoscaler'
Dec  8 04:48:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:07.835+0000 7f46c01bb140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'progress'
Dec  8 04:48:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:07.904+0000 7f46c01bb140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:48:07 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'prometheus'
Dec  8 04:48:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:08.253+0000 7f46c01bb140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:48:08 np0005550137 ceph-mgr[74806]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:48:08 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rbd_support'
Dec  8 04:48:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:08.356+0000 7f46c01bb140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:48:08 np0005550137 ceph-mgr[74806]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:48:08 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'restful'
Dec  8 04:48:08 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rgw'
Dec  8 04:48:08 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Dec  8 04:48:08 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Dec  8 04:48:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:08.817+0000 7f46c01bb140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:48:08 np0005550137 ceph-mgr[74806]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:48:08 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rook'
Dec  8 04:48:09 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:09.409+0000 7f46c01bb140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'selftest'
Dec  8 04:48:09 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:09.482+0000 7f46c01bb140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'snap_schedule'
Dec  8 04:48:09 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:09.561+0000 7f46c01bb140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'stats'
Dec  8 04:48:09 np0005550137 systemd[1]: Stopping User Manager for UID 42477...
Dec  8 04:48:09 np0005550137 systemd[75846]: Activating special unit Exit the Session...
Dec  8 04:48:09 np0005550137 systemd[75846]: Stopped target Main User Target.
Dec  8 04:48:09 np0005550137 systemd[75846]: Stopped target Basic System.
Dec  8 04:48:09 np0005550137 systemd[75846]: Stopped target Paths.
Dec  8 04:48:09 np0005550137 systemd[75846]: Stopped target Sockets.
Dec  8 04:48:09 np0005550137 systemd[75846]: Stopped target Timers.
Dec  8 04:48:09 np0005550137 systemd[75846]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec  8 04:48:09 np0005550137 systemd[75846]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  8 04:48:09 np0005550137 systemd[75846]: Closed D-Bus User Message Bus Socket.
Dec  8 04:48:09 np0005550137 systemd[75846]: Stopped Create User's Volatile Files and Directories.
Dec  8 04:48:09 np0005550137 systemd[75846]: Removed slice User Application Slice.
Dec  8 04:48:09 np0005550137 systemd[75846]: Reached target Shutdown.
Dec  8 04:48:09 np0005550137 systemd[75846]: Finished Exit the Session.
Dec  8 04:48:09 np0005550137 systemd[75846]: Reached target Exit the Session.
Dec  8 04:48:09 np0005550137 systemd[1]: user@42477.service: Deactivated successfully.
Dec  8 04:48:09 np0005550137 systemd[1]: Stopped User Manager for UID 42477.
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'status'
Dec  8 04:48:09 np0005550137 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec  8 04:48:09 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.10 deep-scrub starts
Dec  8 04:48:09 np0005550137 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec  8 04:48:09 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 4.10 deep-scrub ok
Dec  8 04:48:09 np0005550137 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec  8 04:48:09 np0005550137 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec  8 04:48:09 np0005550137 systemd[1]: Removed slice User Slice of UID 42477.
Dec  8 04:48:09 np0005550137 systemd[1]: user-42477.slice: Consumed 34.694s CPU time.
Dec  8 04:48:09 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:09.724+0000 7f46c01bb140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telegraf'
Dec  8 04:48:09 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:09.796+0000 7f46c01bb140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telemetry'
Dec  8 04:48:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:09 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:09.946+0000 7f46c01bb140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:48:09 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'test_orchestrator'
Dec  8 04:48:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:10.172+0000 7f46c01bb140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'volumes'
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif restarted
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif started
Dec  8 04:48:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:10.440+0000 7f46c01bb140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'zabbix'
Dec  8 04:48:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:48:10.509+0000 7f46c01bb140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.kitiwu restarted
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kitiwu
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: ms_deliver_dispatch: unhandled message 0x55f473123860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map Activating!
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map I am now activating
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.kitiwu(active, starting, since 0.0355604s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv restarted
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv started
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e1 all = 1
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: balancer
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] Starting
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.kitiwu is now available
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] Optimize plan auto_2025-12-08_09:48:10
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: Active manager daemon compute-0.kitiwu restarted
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: Activating manager daemon compute-0.kitiwu
Dec  8 04:48:10 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec  8 04:48:10 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: cephadm
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: crash
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: dashboard
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: devicehealth
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO sso] Loading SSO DB version=1
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Starting
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: iostat
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: nfs
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: orchestrator
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: pg_autoscaler
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: progress
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] _maybe_adjust
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [progress INFO root] Loading...
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f464558bb20>, <progress.module.GhostEvent object at 0x7f464558bb50>, <progress.module.GhostEvent object at 0x7f464558bb80>, <progress.module.GhostEvent object at 0x7f464558bbb0>, <progress.module.GhostEvent object at 0x7f464558bbe0>, <progress.module.GhostEvent object at 0x7f464558bc10>, <progress.module.GhostEvent object at 0x7f464558bc40>, <progress.module.GhostEvent object at 0x7f464558bc70>, <progress.module.GhostEvent object at 0x7f464558bca0>, <progress.module.GhostEvent object at 0x7f464558bcd0>, <progress.module.GhostEvent object at 0x7f464558bd00>, <progress.module.GhostEvent object at 0x7f464558bd30>] historic events
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] recovery thread starting
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] starting setup
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded OSDMap, ready.
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: rbd_support
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: restful
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [restful INFO root] server_addr: :: server_port: 8003
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: status
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [restful WARNING root] server not running: no certificate configured
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: telemetry
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] PerfHandler: starting
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: volumes
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TaskHandler: starting
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"} v 0)
Dec  8 04:48:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] setup complete
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  8 04:48:10 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  8 04:48:11 np0005550137 systemd[1]: Created slice User Slice of UID 42477.
Dec  8 04:48:11 np0005550137 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec  8 04:48:11 np0005550137 systemd-logind[805]: New session 35 of user ceph-admin.
Dec  8 04:48:11 np0005550137 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec  8 04:48:11 np0005550137 systemd[1]: Starting User Manager for UID 42477...
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.module] Engine started.
Dec  8 04:48:11 np0005550137 systemd[93042]: Queued start job for default target Main User Target.
Dec  8 04:48:11 np0005550137 systemd[93042]: Created slice User Application Slice.
Dec  8 04:48:11 np0005550137 systemd[93042]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  8 04:48:11 np0005550137 systemd[93042]: Started Daily Cleanup of User's Temporary Directories.
Dec  8 04:48:11 np0005550137 systemd[93042]: Reached target Paths.
Dec  8 04:48:11 np0005550137 systemd[93042]: Reached target Timers.
Dec  8 04:48:11 np0005550137 systemd[93042]: Starting D-Bus User Message Bus Socket...
Dec  8 04:48:11 np0005550137 systemd[93042]: Starting Create User's Volatile Files and Directories...
Dec  8 04:48:11 np0005550137 systemd[93042]: Finished Create User's Volatile Files and Directories.
Dec  8 04:48:11 np0005550137 systemd[93042]: Listening on D-Bus User Message Bus Socket.
Dec  8 04:48:11 np0005550137 systemd[93042]: Reached target Sockets.
Dec  8 04:48:11 np0005550137 systemd[93042]: Reached target Basic System.
Dec  8 04:48:11 np0005550137 systemd[93042]: Reached target Main User Target.
Dec  8 04:48:11 np0005550137 systemd[93042]: Startup finished in 132ms.
Dec  8 04:48:11 np0005550137 systemd[1]: Started User Manager for UID 42477.
Dec  8 04:48:11 np0005550137 systemd[1]: Started Session 35 of User ceph-admin.
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.kitiwu(active, since 1.07357s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14472 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: Manager daemon compute-0.kitiwu is now available
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  8 04:48:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0[74512]: 2025-12-08T09:48:11.623+0000 7fb59a5d4640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e2 new map
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2025-12-08T09:48:11:623626+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:11.623571+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  8 04:48:11 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec  8 04:48:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:11 np0005550137 ceph-mgr[74806]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec  8 04:48:11 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec  8 04:48:11 np0005550137 systemd[1]: libpod-cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd.scope: Deactivated successfully.
Dec  8 04:48:11 np0005550137 podman[92836]: 2025-12-08 09:48:11.686624126 +0000 UTC m=+10.548277380 container died cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd (image=quay.io/ceph/ceph:v19, name=compassionate_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:11 np0005550137 systemd[1]: var-lib-containers-storage-overlay-7981e43bca5f77376085be4da150306aefef36488722d2dfe017b17b55ec39fc-merged.mount: Deactivated successfully.
Dec  8 04:48:11 np0005550137 podman[92836]: 2025-12-08 09:48:11.741576956 +0000 UTC m=+10.603230200 container remove cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd (image=quay.io/ceph/ceph:v19, name=compassionate_merkle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  8 04:48:11 np0005550137 systemd[1]: libpod-conmon-cff9be17f1650e4b4884cccc9b81bdc052b0d64b009c6b1760440641efe3b5bd.scope: Deactivated successfully.
Dec  8 04:48:12 np0005550137 python3[93188]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:12 np0005550137 podman[93218]: 2025-12-08 09:48:12.103006491 +0000 UTC m=+0.037770084 container create 87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d (image=quay.io/ceph/ceph:v19, name=condescending_brown, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:12 np0005550137 systemd[1]: Started libpod-conmon-87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d.scope.
Dec  8 04:48:12 np0005550137 podman[93222]: 2025-12-08 09:48:12.148160088 +0000 UTC m=+0.073309843 container exec e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:12 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:12 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d5b63ab7184f54b823cf9b46ad7901ea99088ca1483bb6d39fb951bd03591ce/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:12 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d5b63ab7184f54b823cf9b46ad7901ea99088ca1483bb6d39fb951bd03591ce/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:12 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d5b63ab7184f54b823cf9b46ad7901ea99088ca1483bb6d39fb951bd03591ce/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:12 np0005550137 podman[93218]: 2025-12-08 09:48:12.084980189 +0000 UTC m=+0.019743752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:12 np0005550137 podman[93218]: 2025-12-08 09:48:12.201997367 +0000 UTC m=+0.136760930 container init 87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d (image=quay.io/ceph/ceph:v19, name=condescending_brown, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:12 np0005550137 podman[93218]: 2025-12-08 09:48:12.219760141 +0000 UTC m=+0.154523684 container start 87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d (image=quay.io/ceph/ceph:v19, name=condescending_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec  8 04:48:12 np0005550137 podman[93218]: 2025-12-08 09:48:12.223048156 +0000 UTC m=+0.157811729 container attach 87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d (image=quay.io/ceph/ceph:v19, name=condescending_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:48:12] ENGINE Bus STARTING
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:48:12] ENGINE Bus STARTING
Dec  8 04:48:12 np0005550137 podman[93222]: 2025-12-08 09:48:12.256568127 +0000 UTC m=+0.181717862 container exec_died e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:48:12] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:48:12] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:48:12] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:48:12] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:48:12] ENGINE Bus STARTED
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:48:12] ENGINE Bus STARTED
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:48:12] ENGINE Client ('192.168.122.100', 54030) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:48:12] ENGINE Client ('192.168.122.100', 54030) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:12 np0005550137 condescending_brown[93255]: Scheduled mds.cephfs update...
Dec  8 04:48:12 np0005550137 systemd[1]: libpod-87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d.scope: Deactivated successfully.
Dec  8 04:48:12 np0005550137 podman[93218]: 2025-12-08 09:48:12.608840726 +0000 UTC m=+0.543604299 container died 87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d (image=quay.io/ceph/ceph:v19, name=condescending_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:48:12] ENGINE Bus STARTING
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:48:12] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:12 np0005550137 systemd[1]: var-lib-containers-storage-overlay-6d5b63ab7184f54b823cf9b46ad7901ea99088ca1483bb6d39fb951bd03591ce-merged.mount: Deactivated successfully.
Dec  8 04:48:12 np0005550137 podman[93218]: 2025-12-08 09:48:12.661438849 +0000 UTC m=+0.596202402 container remove 87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d (image=quay.io/ceph/ceph:v19, name=condescending_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Dec  8 04:48:12 np0005550137 systemd[1]: libpod-conmon-87a370fc08c53a3391dd93320485065b7769d7f8cdf9e3e7fabd2b6cbdd5699d.scope: Deactivated successfully.
Dec  8 04:48:12 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec  8 04:48:12 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.kitiwu(active, since 2s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:48:12 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Check health
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:12 np0005550137 podman[93441]: 2025-12-08 09:48:12.813910973 +0000 UTC m=+0.067135315 container exec 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:48:12 np0005550137 podman[93441]: 2025-12-08 09:48:12.824939172 +0000 UTC m=+0.078163484 container exec_died 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:48:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:13 np0005550137 python3[93498]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:13 np0005550137 podman[93533]: 2025-12-08 09:48:13.084169358 +0000 UTC m=+0.047384052 container create 00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753 (image=quay.io/ceph/ceph:v19, name=compassionate_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  8 04:48:13 np0005550137 systemd[1]: Started libpod-conmon-00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753.scope.
Dec  8 04:48:13 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:13 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f99745724323c8126ae13c5e1ff290f5f5e036031024dd2f5e6e3bf079da59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:13 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f99745724323c8126ae13c5e1ff290f5f5e036031024dd2f5e6e3bf079da59/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:13 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4f99745724323c8126ae13c5e1ff290f5f5e036031024dd2f5e6e3bf079da59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:13 np0005550137 podman[93533]: 2025-12-08 09:48:13.059132094 +0000 UTC m=+0.022346818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:13 np0005550137 podman[93533]: 2025-12-08 09:48:13.162241439 +0000 UTC m=+0.125456153 container init 00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753 (image=quay.io/ceph/ceph:v19, name=compassionate_sinoussi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  8 04:48:13 np0005550137 podman[93533]: 2025-12-08 09:48:13.175320227 +0000 UTC m=+0.138534901 container start 00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753 (image=quay.io/ceph/ceph:v19, name=compassionate_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  8 04:48:13 np0005550137 podman[93533]: 2025-12-08 09:48:13.179401376 +0000 UTC m=+0.142616070 container attach 00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753 (image=quay.io/ceph/ceph:v19, name=compassionate_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  8 04:48:13 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  8 04:48:13 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec  8 04:48:13 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:48:12] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:48:12] ENGINE Bus STARTED
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:48:12] ENGINE Client ('192.168.122.100', 54030) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Dec  8 04:48:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v7: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec  8 04:48:14 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:15 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:15 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:15 np0005550137 systemd[1]: libpod-00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753.scope: Deactivated successfully.
Dec  8 04:48:15 np0005550137 podman[93533]: 2025-12-08 09:48:15.065732069 +0000 UTC m=+2.028946773 container died 00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753 (image=quay.io/ceph/ceph:v19, name=compassionate_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:15 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e4f99745724323c8126ae13c5e1ff290f5f5e036031024dd2f5e6e3bf079da59-merged.mount: Deactivated successfully.
Dec  8 04:48:15 np0005550137 podman[93533]: 2025-12-08 09:48:15.109917118 +0000 UTC m=+2.073131792 container remove 00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753 (image=quay.io/ceph/ceph:v19, name=compassionate_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  8 04:48:15 np0005550137 systemd[1]: libpod-conmon-00e0d5887f8ae2452e3a022c1045e54557f2d1bdba44f6b5f3357c5385091753.scope: Deactivated successfully.
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.kitiwu(active, since 4s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:48:15 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:15 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:15 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:15 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:15 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:15 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:15 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec  8 04:48:15 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec  8 04:48:15 np0005550137 python3[94261]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid glance _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:15 np0005550137 podman[94332]: 2025-12-08 09:48:15.800369968 +0000 UTC m=+0.036353303 container create a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e (image=quay.io/ceph/ceph:v19, name=interesting_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  8 04:48:15 np0005550137 systemd[1]: Started libpod-conmon-a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e.scope.
Dec  8 04:48:15 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:15 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c088825f7b871c1d044299e8ed24737ef42456f3a7efd3ce415e0fc059b5af0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:15 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c088825f7b871c1d044299e8ed24737ef42456f3a7efd3ce415e0fc059b5af0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:15 np0005550137 podman[94332]: 2025-12-08 09:48:15.871356474 +0000 UTC m=+0.107339819 container init a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e (image=quay.io/ceph/ceph:v19, name=interesting_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  8 04:48:15 np0005550137 podman[94332]: 2025-12-08 09:48:15.878339407 +0000 UTC m=+0.114322742 container start a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e (image=quay.io/ceph/ceph:v19, name=interesting_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  8 04:48:15 np0005550137 podman[94332]: 2025-12-08 09:48:15.784491309 +0000 UTC m=+0.020474664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:15 np0005550137 podman[94332]: 2025-12-08 09:48:15.881719324 +0000 UTC m=+0.117702739 container attach a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e (image=quay.io/ceph/ceph:v19, name=interesting_gagarin, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec  8 04:48:15 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:16 np0005550137 interesting_gagarin[94372]: could not fetch user info: no user info saved
Dec  8 04:48:16 np0005550137 systemd[1]: libpod-a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e.scope: Deactivated successfully.
Dec  8 04:48:16 np0005550137 podman[94332]: 2025-12-08 09:48:16.10126031 +0000 UTC m=+0.337243655 container died a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e (image=quay.io/ceph/ceph:v19, name=interesting_gagarin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:16 np0005550137 systemd[1]: var-lib-containers-storage-overlay-8c088825f7b871c1d044299e8ed24737ef42456f3a7efd3ce415e0fc059b5af0-merged.mount: Deactivated successfully.
Dec  8 04:48:16 np0005550137 podman[94332]: 2025-12-08 09:48:16.141997309 +0000 UTC m=+0.377980684 container remove a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e (image=quay.io/ceph/ceph:v19, name=interesting_gagarin, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  8 04:48:16 np0005550137 systemd[1]: libpod-conmon-a80bacce2cd9a61dae98b2d886047835c489a06ca9137456bf53294e25dc814e.scope: Deactivated successfully.
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:16 np0005550137 python3[94694]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="glance" --display-name="Glance S3 User" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v10: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:48:16 np0005550137 podman[94744]: 2025-12-08 09:48:16.567463758 +0000 UTC m=+0.049537125 container create 0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863 (image=quay.io/ceph/ceph:v19, name=charming_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:48:16 np0005550137 systemd[1]: Started libpod-conmon-0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863.scope.
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:16 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2563826280f9df0d29dfac2fafad670085bc791464e2da81fae33f309e319e41/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2563826280f9df0d29dfac2fafad670085bc791464e2da81fae33f309e319e41/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:16 np0005550137 podman[94744]: 2025-12-08 09:48:16.550647911 +0000 UTC m=+0.032721278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:16 np0005550137 podman[94744]: 2025-12-08 09:48:16.652958254 +0000 UTC m=+0.135031621 container init 0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863 (image=quay.io/ceph/ceph:v19, name=charming_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:48:16 np0005550137 podman[94744]: 2025-12-08 09:48:16.66146599 +0000 UTC m=+0.143539337 container start 0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863 (image=quay.io/ceph/ceph:v19, name=charming_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:16 np0005550137 podman[94744]: 2025-12-08 09:48:16.664470847 +0000 UTC m=+0.146544254 container attach 0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863 (image=quay.io/ceph/ceph:v19, name=charming_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:16 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec  8 04:48:16 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]: {
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "user_id": "glance",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "display_name": "Glance S3 User",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "email": "",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "suspended": 0,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "max_buckets": 1000,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "subusers": [],
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "keys": [
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        {
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:            "user": "glance",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:            "access_key": "QVSQQ74P3H4LKKWS79M3",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:            "secret_key": "IS5M0nNNhDkeACCDKXqbUGbFBm96piyTrCpESssP",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:            "active": true,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:            "create_date": "2025-12-08T09:48:16.821789Z"
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        }
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    ],
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "swift_keys": [],
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "caps": [],
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "op_mask": "read, write, delete",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "default_placement": "",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "default_storage_class": "",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "placement_tags": [],
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "bucket_quota": {
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "enabled": false,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "check_on_raw": false,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "max_size": -1,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "max_size_kb": 0,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "max_objects": -1
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    },
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "user_quota": {
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "enabled": false,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "check_on_raw": false,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "max_size": -1,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "max_size_kb": 0,
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:        "max_objects": -1
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    },
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "temp_url_keys": [],
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "type": "rgw",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "mfa_ids": [],
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "account_id": "",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "path": "/",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "create_date": "2025-12-08T09:48:16.821224Z",
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "tags": [],
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]:    "group_ids": []
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]: }
Dec  8 04:48:16 np0005550137 charming_varahamihira[94785]: 
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:48:16 np0005550137 systemd[1]: libpod-0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863.scope: Deactivated successfully.
Dec  8 04:48:16 np0005550137 podman[94744]: 2025-12-08 09:48:16.900867061 +0000 UTC m=+0.382940448 container died 0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863 (image=quay.io/ceph/ceph:v19, name=charming_varahamihira, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 4c9d7d4d-b64a-4ee4-8d23-e571b7e17f09 (Updating node-exporter deployment (+2 -> 3))
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Dec  8 04:48:16 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Dec  8 04:48:16 np0005550137 systemd[1]: var-lib-containers-storage-overlay-2563826280f9df0d29dfac2fafad670085bc791464e2da81fae33f309e319e41-merged.mount: Deactivated successfully.
Dec  8 04:48:16 np0005550137 podman[94744]: 2025-12-08 09:48:16.954759861 +0000 UTC m=+0.436833248 container remove 0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863 (image=quay.io/ceph/ceph:v19, name=charming_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:48:16 np0005550137 systemd[1]: libpod-conmon-0f960722d1fbfbe67ec5855ad089dae814e98a7f6d00327bcd52da91ffe6d863.scope: Deactivated successfully.
Dec  8 04:48:16 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:17 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.kitiwu(active, since 6s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:48:17 np0005550137 podman[94909]: 2025-12-08 09:48:17.388485018 +0000 UTC m=+0.052118299 container create e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366 (image=quay.io/ceph/ceph:v19, name=festive_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:17 np0005550137 systemd[1]: Started libpod-conmon-e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366.scope.
Dec  8 04:48:17 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e19ba8abd19076a8fcdfe991f7ba2801a301b1413520f5a91beee9e0abc1eb7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:17 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e19ba8abd19076a8fcdfe991f7ba2801a301b1413520f5a91beee9e0abc1eb7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:17 np0005550137 podman[94909]: 2025-12-08 09:48:17.452242444 +0000 UTC m=+0.115875725 container init e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366 (image=quay.io/ceph/ceph:v19, name=festive_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:17 np0005550137 podman[94909]: 2025-12-08 09:48:17.457753854 +0000 UTC m=+0.121387135 container start e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366 (image=quay.io/ceph/ceph:v19, name=festive_liskov, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:17 np0005550137 podman[94909]: 2025-12-08 09:48:17.460517144 +0000 UTC m=+0.124150535 container attach e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366 (image=quay.io/ceph/ceph:v19, name=festive_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:17 np0005550137 podman[94909]: 2025-12-08 09:48:17.367715248 +0000 UTC m=+0.031348579 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:17 np0005550137 festive_liskov[94925]: {
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "user_id": "glance",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "display_name": "Glance S3 User",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "email": "",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "suspended": 0,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "max_buckets": 1000,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "subusers": [],
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "keys": [
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        {
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:            "user": "glance",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:            "access_key": "QVSQQ74P3H4LKKWS79M3",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:            "secret_key": "IS5M0nNNhDkeACCDKXqbUGbFBm96piyTrCpESssP",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:            "active": true,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:            "create_date": "2025-12-08T09:48:16.821789Z"
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        }
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    ],
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "swift_keys": [],
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "caps": [],
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "op_mask": "read, write, delete",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "default_placement": "",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "default_storage_class": "",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "placement_tags": [],
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "bucket_quota": {
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "enabled": false,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "check_on_raw": false,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "max_size": -1,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "max_size_kb": 0,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "max_objects": -1
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    },
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "user_quota": {
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "enabled": false,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "check_on_raw": false,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "max_size": -1,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "max_size_kb": 0,
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:        "max_objects": -1
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    },
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "temp_url_keys": [],
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "type": "rgw",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "mfa_ids": [],
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "account_id": "",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "path": "/",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "create_date": "2025-12-08T09:48:16.821224Z",
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "tags": [],
Dec  8 04:48:17 np0005550137 festive_liskov[94925]:    "group_ids": []
Dec  8 04:48:17 np0005550137 festive_liskov[94925]: }
Dec  8 04:48:17 np0005550137 festive_liskov[94925]: 
Dec  8 04:48:17 np0005550137 systemd[1]: libpod-e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366.scope: Deactivated successfully.
Dec  8 04:48:17 np0005550137 podman[94909]: 2025-12-08 09:48:17.652831352 +0000 UTC m=+0.316464693 container died e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366 (image=quay.io/ceph/ceph:v19, name=festive_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:17 np0005550137 systemd[1]: var-lib-containers-storage-overlay-9e19ba8abd19076a8fcdfe991f7ba2801a301b1413520f5a91beee9e0abc1eb7-merged.mount: Deactivated successfully.
Dec  8 04:48:17 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec  8 04:48:17 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec  8 04:48:17 np0005550137 podman[94909]: 2025-12-08 09:48:17.698774762 +0000 UTC m=+0.362408043 container remove e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366 (image=quay.io/ceph/ceph:v19, name=festive_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:17 np0005550137 systemd[1]: libpod-conmon-e688fb5abb909a425f0930e567afebbe2a2c311bd8c45465c54af417ccf9f366.scope: Deactivated successfully.
Dec  8 04:48:18 np0005550137 ceph-mon[74516]: Deploying daemon node-exporter.compute-1 on compute-1
Dec  8 04:48:18 np0005550137 ceph-mon[74516]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Dec  8 04:48:18 np0005550137 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  8 04:48:18 np0005550137 python3[95101]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  8 04:48:18 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 14 op/s
Dec  8 04:48:18 np0005550137 python3[95175]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765187297.9082057-37358-187426867731148/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=e55994fcf36046011ff4d258cd3183c8a55fbcf8 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:48:18 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec  8 04:48:18 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec  8 04:48:19 np0005550137 python3[95225]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:19 np0005550137 podman[95226]: 2025-12-08 09:48:19.312815613 +0000 UTC m=+0.072763698 container create a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316 (image=quay.io/ceph/ceph:v19, name=nice_rosalind, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:19 np0005550137 systemd[1]: Started libpod-conmon-a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316.scope.
Dec  8 04:48:19 np0005550137 podman[95226]: 2025-12-08 09:48:19.290081505 +0000 UTC m=+0.050029580 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:19 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb04e0756a244947903cbd621fe187e439f387e9472354a8759ba5af6cd3f6b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eeb04e0756a244947903cbd621fe187e439f387e9472354a8759ba5af6cd3f6b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:19 np0005550137 podman[95226]: 2025-12-08 09:48:19.404699363 +0000 UTC m=+0.164647458 container init a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316 (image=quay.io/ceph/ceph:v19, name=nice_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:48:19 np0005550137 podman[95226]: 2025-12-08 09:48:19.412041075 +0000 UTC m=+0.171989120 container start a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316 (image=quay.io/ceph/ceph:v19, name=nice_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  8 04:48:19 np0005550137 podman[95226]: 2025-12-08 09:48:19.415258839 +0000 UTC m=+0.175206904 container attach a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316 (image=quay.io/ceph/ceph:v19, name=nice_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:19 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Dec  8 04:48:19 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Dec  8 04:48:19 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec  8 04:48:19 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3367952435' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  8 04:48:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3367952435' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  8 04:48:19 np0005550137 systemd[1]: libpod-a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316.scope: Deactivated successfully.
Dec  8 04:48:19 np0005550137 conmon[95241]: conmon a738a072cc0103b7852e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316.scope/container/memory.events
Dec  8 04:48:19 np0005550137 podman[95226]: 2025-12-08 09:48:19.909402876 +0000 UTC m=+0.669350961 container died a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316 (image=quay.io/ceph/ceph:v19, name=nice_rosalind, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Dec  8 04:48:19 np0005550137 systemd[1]: var-lib-containers-storage-overlay-eeb04e0756a244947903cbd621fe187e439f387e9472354a8759ba5af6cd3f6b-merged.mount: Deactivated successfully.
Dec  8 04:48:19 np0005550137 podman[95226]: 2025-12-08 09:48:19.959377132 +0000 UTC m=+0.719325187 container remove a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316 (image=quay.io/ceph/ceph:v19, name=nice_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec  8 04:48:19 np0005550137 systemd[1]: libpod-conmon-a738a072cc0103b7852efe7d2f8dc91aa3fdbbc36bb743f57094691b24390316.scope: Deactivated successfully.
Dec  8 04:48:20 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:20 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:20 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:20 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3367952435' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Dec  8 04:48:20 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3367952435' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec  8 04:48:20 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Dec  8 04:48:20 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec  8 04:48:20 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec  8 04:48:20 np0005550137 python3[95305]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:20 np0005550137 podman[95307]: 2025-12-08 09:48:20.782334449 +0000 UTC m=+0.036979312 container create ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3 (image=quay.io/ceph/ceph:v19, name=sleepy_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Dec  8 04:48:20 np0005550137 systemd[1]: Started libpod-conmon-ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3.scope.
Dec  8 04:48:20 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f74c89233ce095151403106355ead742e0df1ae00543403f43c168f02710cd4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:20 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f74c89233ce095151403106355ead742e0df1ae00543403f43c168f02710cd4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:20 np0005550137 podman[95307]: 2025-12-08 09:48:20.767929352 +0000 UTC m=+0.022574225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:20 np0005550137 podman[95307]: 2025-12-08 09:48:20.877109093 +0000 UTC m=+0.131753976 container init ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3 (image=quay.io/ceph/ceph:v19, name=sleepy_morse, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  8 04:48:20 np0005550137 podman[95307]: 2025-12-08 09:48:20.883414486 +0000 UTC m=+0.138059359 container start ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3 (image=quay.io/ceph/ceph:v19, name=sleepy_morse, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:48:20 np0005550137 podman[95307]: 2025-12-08 09:48:20.886831125 +0000 UTC m=+0.141475998 container attach ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3 (image=quay.io/ceph/ceph:v19, name=sleepy_morse, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:21 np0005550137 ceph-mon[74516]: Deploying daemon node-exporter.compute-2 on compute-2
Dec  8 04:48:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  8 04:48:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175476984' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  8 04:48:21 np0005550137 sleepy_morse[95323]: 
Dec  8 04:48:21 np0005550137 sleepy_morse[95323]: {"fsid":"ceb838ef-9d5d-54e4-bddb-2f01adce2ad4","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":72,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":49,"num_osds":3,"num_up_osds":3,"osd_up_since":1765187255,"num_in_osds":3,"osd_in_since":1765187235,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":88944640,"bytes_avail":64322981888,"bytes_total":64411926528,"read_bytes_sec":29205,"write_bytes_sec":0,"read_op_per_sec":11,"write_op_per_sec":1},"fsmap":{"epoch":2,"btime":"2025-12-08T09:48:11:623626+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-12-08T09:47:53.183814+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.kitiwu":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.mmkaif":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.zqytsv":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14391":{"start_epoch":4,"start_stamp":"2025-12-08T09:47:52.204639+0000","gid":14391,"addr":"192.168.122.100:0/3979683973","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.slkrtm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 
2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"f2fa6c7a-b392-4a6f-84e7-a8a07770c620","zone_name":"default","zonegroup_id":"68492763-3f06-49eb-87b1-edc419fff75a","zonegroup_name":"default"},"task_status":{}},"24149":{"start_epoch":5,"start_stamp":"2025-12-08T09:47:52.220080+0000","gid":24149,"addr":"192.168.122.101:0/3268586272","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.rblbpq","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"f2fa6c7a-b392-4a6f-84e7-a8a07770c620","zone_name":"default","zonegroup_id":"68492763-3f06-49eb-87b1-edc419fff75a","zonegroup_name":"default"},"task_status":{}},"24160":{"start_epoch":5,"start_stamp":"2025-12-08T09:47:52.213233+0000","gid":24160,"addr":"192.168.122.102:0/2102705496","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 
9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.dimexm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864312","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"f2fa6c7a-b392-4a6f-84e7-a8a07770c620","zone_name":"default","zonegroup_id":"68492763-3f06-49eb-87b1-edc419fff75a","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"4c9d7d4d-b64a-4ee4-8d23-e571b7e17f09":{"message":"Updating node-exporter deployment (+2 -> 3) (2s)\n      [==============..............] (remaining: 2s)","progress":0.5,"add_to_ceph_s":true}}}
Dec  8 04:48:21 np0005550137 systemd[1]: libpod-ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3.scope: Deactivated successfully.
Dec  8 04:48:21 np0005550137 podman[95307]: 2025-12-08 09:48:21.31842906 +0000 UTC m=+0.573073933 container died ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3 (image=quay.io/ceph/ceph:v19, name=sleepy_morse, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  8 04:48:21 np0005550137 systemd[1]: var-lib-containers-storage-overlay-8f74c89233ce095151403106355ead742e0df1ae00543403f43c168f02710cd4-merged.mount: Deactivated successfully.
Dec  8 04:48:21 np0005550137 podman[95307]: 2025-12-08 09:48:21.353803185 +0000 UTC m=+0.608448038 container remove ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3 (image=quay.io/ceph/ceph:v19, name=sleepy_morse, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  8 04:48:21 np0005550137 systemd[1]: libpod-conmon-ccd41d805118b0743d5deb7e98f99c7504a6ed141ec82cdaeec83fdcb2e85ed3.scope: Deactivated successfully.
Dec  8 04:48:21 np0005550137 python3[95385]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:21 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec  8 04:48:21 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec  8 04:48:21 np0005550137 podman[95386]: 2025-12-08 09:48:21.757323828 +0000 UTC m=+0.045905941 container create 105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2 (image=quay.io/ceph/ceph:v19, name=funny_lamport, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  8 04:48:21 np0005550137 systemd[1]: Started libpod-conmon-105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2.scope.
Dec  8 04:48:21 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b05ff960c023ffdab12513030d44ded95fe984ce5f2e16341b362c55a5d72b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73b05ff960c023ffdab12513030d44ded95fe984ce5f2e16341b362c55a5d72b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:21 np0005550137 podman[95386]: 2025-12-08 09:48:21.734363563 +0000 UTC m=+0.022945616 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:21 np0005550137 podman[95386]: 2025-12-08 09:48:21.83098767 +0000 UTC m=+0.119569713 container init 105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2 (image=quay.io/ceph/ceph:v19, name=funny_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Dec  8 04:48:21 np0005550137 podman[95386]: 2025-12-08 09:48:21.837742056 +0000 UTC m=+0.126324079 container start 105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2 (image=quay.io/ceph/ceph:v19, name=funny_lamport, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Dec  8 04:48:21 np0005550137 podman[95386]: 2025-12-08 09:48:21.841352831 +0000 UTC m=+0.129934854 container attach 105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2 (image=quay.io/ceph/ceph:v19, name=funny_lamport, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:22 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 4c9d7d4d-b64a-4ee4-8d23-e571b7e17f09 (Updating node-exporter deployment (+2 -> 3))
Dec  8 04:48:22 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 4c9d7d4d-b64a-4ee4-8d23-e571b7e17f09 (Updating node-exporter deployment (+2 -> 3)) in 5 seconds
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec  8 04:48:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179970966' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Dec  8 04:48:22 np0005550137 funny_lamport[95402]: 
Dec  8 04:48:22 np0005550137 funny_lamport[95402]: {"epoch":3,"fsid":"ceb838ef-9d5d-54e4-bddb-2f01adce2ad4","modified":"2025-12-08T09:47:03.886776Z","created":"2025-12-08T09:44:55.163607Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Dec  8 04:48:22 np0005550137 funny_lamport[95402]: dumped monmap epoch 3
Dec  8 04:48:22 np0005550137 systemd[1]: libpod-105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2.scope: Deactivated successfully.
Dec  8 04:48:22 np0005550137 podman[95386]: 2025-12-08 09:48:22.299632889 +0000 UTC m=+0.588214922 container died 105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2 (image=quay.io/ceph/ceph:v19, name=funny_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:48:22 np0005550137 systemd[1]: var-lib-containers-storage-overlay-73b05ff960c023ffdab12513030d44ded95fe984ce5f2e16341b362c55a5d72b-merged.mount: Deactivated successfully.
Dec  8 04:48:22 np0005550137 podman[95386]: 2025-12-08 09:48:22.341676816 +0000 UTC m=+0.630258829 container remove 105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2 (image=quay.io/ceph/ceph:v19, name=funny_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  8 04:48:22 np0005550137 systemd[1]: libpod-conmon-105e46e8b9c959b0115937a3d91aec353c3cb68bb29a6d031b1d904435ecc1f2.scope: Deactivated successfully.
Dec  8 04:48:22 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 255 B/s wr, 14 op/s
Dec  8 04:48:22 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Dec  8 04:48:22 np0005550137 podman[95525]: 2025-12-08 09:48:22.742055368 +0000 UTC m=+0.042818991 container create 9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  8 04:48:22 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Dec  8 04:48:22 np0005550137 systemd[1]: Started libpod-conmon-9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc.scope.
Dec  8 04:48:22 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:22 np0005550137 podman[95525]: 2025-12-08 09:48:22.722531463 +0000 UTC m=+0.023295126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:22 np0005550137 podman[95525]: 2025-12-08 09:48:22.827384078 +0000 UTC m=+0.128147711 container init 9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:22 np0005550137 podman[95525]: 2025-12-08 09:48:22.837722478 +0000 UTC m=+0.138486101 container start 9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:22 np0005550137 brave_euclid[95566]: 167 167
Dec  8 04:48:22 np0005550137 systemd[1]: libpod-9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc.scope: Deactivated successfully.
Dec  8 04:48:22 np0005550137 podman[95525]: 2025-12-08 09:48:22.84127665 +0000 UTC m=+0.142040273 container attach 9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  8 04:48:22 np0005550137 podman[95525]: 2025-12-08 09:48:22.843405112 +0000 UTC m=+0.144168765 container died 9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:48:22 np0005550137 systemd[1]: var-lib-containers-storage-overlay-dfc2a9a52f805e1c2d45f4931df41c4ba3689c09b3bd1f723ba9066f2f4b5455-merged.mount: Deactivated successfully.
Dec  8 04:48:22 np0005550137 podman[95525]: 2025-12-08 09:48:22.89482349 +0000 UTC m=+0.195587153 container remove 9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:22 np0005550137 systemd[1]: libpod-conmon-9aa1a8db915ecd2d7e34397b598a1c194ba7c6ff20d19f6c330449cf5a64bffc.scope: Deactivated successfully.
Dec  8 04:48:22 np0005550137 python3[95568]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:22 np0005550137 podman[95587]: 2025-12-08 09:48:22.988513293 +0000 UTC m=+0.046635300 container create c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676 (image=quay.io/ceph/ceph:v19, name=angry_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:23 np0005550137 systemd[1]: Started libpod-conmon-c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676.scope.
Dec  8 04:48:23 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86774233d286d8f280507c25d513e7547c641ca85e65ed7c343a9cdc465abdcc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86774233d286d8f280507c25d513e7547c641ca85e65ed7c343a9cdc465abdcc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:23 np0005550137 podman[95587]: 2025-12-08 09:48:22.969140723 +0000 UTC m=+0.027262740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:23 np0005550137 podman[95587]: 2025-12-08 09:48:23.083427671 +0000 UTC m=+0.141549668 container init c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676 (image=quay.io/ceph/ceph:v19, name=angry_hofstadter, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  8 04:48:23 np0005550137 podman[95587]: 2025-12-08 09:48:23.092213276 +0000 UTC m=+0.150335263 container start c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676 (image=quay.io/ceph/ceph:v19, name=angry_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec  8 04:48:23 np0005550137 podman[95607]: 2025-12-08 09:48:23.095990555 +0000 UTC m=+0.045208829 container create 2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  8 04:48:23 np0005550137 podman[95587]: 2025-12-08 09:48:23.105706406 +0000 UTC m=+0.163828413 container attach c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676 (image=quay.io/ceph/ceph:v19, name=angry_hofstadter, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:23 np0005550137 systemd[1]: Started libpod-conmon-2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5.scope.
Dec  8 04:48:23 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85f7dd89309f084ec7dc5be7449ac760f3c8975d89d54d3a2750b062d423b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85f7dd89309f084ec7dc5be7449ac760f3c8975d89d54d3a2750b062d423b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85f7dd89309f084ec7dc5be7449ac760f3c8975d89d54d3a2750b062d423b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85f7dd89309f084ec7dc5be7449ac760f3c8975d89d54d3a2750b062d423b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02d85f7dd89309f084ec7dc5be7449ac760f3c8975d89d54d3a2750b062d423b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:23 np0005550137 podman[95607]: 2025-12-08 09:48:23.073390121 +0000 UTC m=+0.022608425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:23 np0005550137 podman[95607]: 2025-12-08 09:48:23.173615743 +0000 UTC m=+0.122834057 container init 2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:23 np0005550137 podman[95607]: 2025-12-08 09:48:23.181783499 +0000 UTC m=+0.131001773 container start 2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dubinsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Dec  8 04:48:23 np0005550137 podman[95607]: 2025-12-08 09:48:23.185585389 +0000 UTC m=+0.134803713 container attach 2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:48:23 np0005550137 ecstatic_dubinsky[95627]: --> passed data devices: 0 physical, 1 LVM
Dec  8 04:48:23 np0005550137 ecstatic_dubinsky[95627]: --> All data devices are unavailable
Dec  8 04:48:23 np0005550137 systemd[1]: libpod-2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5.scope: Deactivated successfully.
Dec  8 04:48:23 np0005550137 podman[95607]: 2025-12-08 09:48:23.52721912 +0000 UTC m=+0.476437424 container died 2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec  8 04:48:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3067774280' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  8 04:48:23 np0005550137 angry_hofstadter[95599]: [client.openstack]
Dec  8 04:48:23 np0005550137 angry_hofstadter[95599]: 	key = AQD0nTZpAAAAABAA2mYVreoIKyTzcjzsUbUcew==
Dec  8 04:48:23 np0005550137 angry_hofstadter[95599]: 	caps mgr = "allow *"
Dec  8 04:48:23 np0005550137 angry_hofstadter[95599]: 	caps mon = "profile rbd"
Dec  8 04:48:23 np0005550137 angry_hofstadter[95599]: 	caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec  8 04:48:23 np0005550137 systemd[1]: var-lib-containers-storage-overlay-02d85f7dd89309f084ec7dc5be7449ac760f3c8975d89d54d3a2750b062d423b-merged.mount: Deactivated successfully.
Dec  8 04:48:23 np0005550137 systemd[1]: libpod-c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676.scope: Deactivated successfully.
Dec  8 04:48:23 np0005550137 conmon[95599]: conmon c3578dc7d224488aa8c8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676.scope/container/memory.events
Dec  8 04:48:23 np0005550137 podman[95587]: 2025-12-08 09:48:23.565525799 +0000 UTC m=+0.623647846 container died c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676 (image=quay.io/ceph/ceph:v19, name=angry_hofstadter, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:23 np0005550137 podman[95607]: 2025-12-08 09:48:23.581379168 +0000 UTC m=+0.530597442 container remove 2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_dubinsky, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:23 np0005550137 systemd[1]: libpod-conmon-2125b53a87931183226f59952df066cf0965a84d1d7dab860bf95faf2811fea5.scope: Deactivated successfully.
Dec  8 04:48:23 np0005550137 systemd[1]: var-lib-containers-storage-overlay-86774233d286d8f280507c25d513e7547c641ca85e65ed7c343a9cdc465abdcc-merged.mount: Deactivated successfully.
Dec  8 04:48:23 np0005550137 podman[95587]: 2025-12-08 09:48:23.617617917 +0000 UTC m=+0.675739904 container remove c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676 (image=quay.io/ceph/ceph:v19, name=angry_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:48:23 np0005550137 systemd[1]: libpod-conmon-c3578dc7d224488aa8c86132f8f0de66bcce5fbee3dd200bc2869efd1c787676.scope: Deactivated successfully.
Dec  8 04:48:23 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec  8 04:48:23 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec  8 04:48:24 np0005550137 podman[95777]: 2025-12-08 09:48:24.142803483 +0000 UTC m=+0.041340228 container create 15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  8 04:48:24 np0005550137 systemd[1]: Started libpod-conmon-15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e.scope.
Dec  8 04:48:24 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:24 np0005550137 podman[95777]: 2025-12-08 09:48:24.211390219 +0000 UTC m=+0.109926994 container init 15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  8 04:48:24 np0005550137 podman[95777]: 2025-12-08 09:48:24.122381202 +0000 UTC m=+0.020918007 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:24 np0005550137 podman[95777]: 2025-12-08 09:48:24.219620067 +0000 UTC m=+0.118156862 container start 15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_easley, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec  8 04:48:24 np0005550137 podman[95777]: 2025-12-08 09:48:24.223091538 +0000 UTC m=+0.121628313 container attach 15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_easley, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:24 np0005550137 intelligent_easley[95794]: 167 167
Dec  8 04:48:24 np0005550137 podman[95777]: 2025-12-08 09:48:24.224957771 +0000 UTC m=+0.123494526 container died 15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:24 np0005550137 systemd[1]: libpod-15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e.scope: Deactivated successfully.
Dec  8 04:48:24 np0005550137 ceph-mon[74516]: from='client.? 192.168.122.100:0/3067774280' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Dec  8 04:48:24 np0005550137 systemd[1]: var-lib-containers-storage-overlay-036a8cfa846c32d7f6ec234df898e3bf7c8970a5af7644ba4c0300d4e444d941-merged.mount: Deactivated successfully.
Dec  8 04:48:24 np0005550137 podman[95777]: 2025-12-08 09:48:24.264482056 +0000 UTC m=+0.163018811 container remove 15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  8 04:48:24 np0005550137 systemd[1]: libpod-conmon-15d8807593383bf102d1f82dddf51e7badca8491a369d2a4c05a0bd0699d374e.scope: Deactivated successfully.
Dec  8 04:48:24 np0005550137 podman[95818]: 2025-12-08 09:48:24.427678501 +0000 UTC m=+0.023879103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:24 np0005550137 podman[95818]: 2025-12-08 09:48:24.532869416 +0000 UTC m=+0.129070008 container create 0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:24 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 212 B/s wr, 12 op/s
Dec  8 04:48:24 np0005550137 systemd[1]: Started libpod-conmon-0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d.scope.
Dec  8 04:48:24 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12de04e76f532a62f191c4c65772f1d37020c3a611853f1a7aef1239f75ce1c8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12de04e76f532a62f191c4c65772f1d37020c3a611853f1a7aef1239f75ce1c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12de04e76f532a62f191c4c65772f1d37020c3a611853f1a7aef1239f75ce1c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12de04e76f532a62f191c4c65772f1d37020c3a611853f1a7aef1239f75ce1c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:24 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec  8 04:48:24 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec  8 04:48:24 np0005550137 podman[95818]: 2025-12-08 09:48:24.726449311 +0000 UTC m=+0.322649943 container init 0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec  8 04:48:24 np0005550137 podman[95818]: 2025-12-08 09:48:24.734241556 +0000 UTC m=+0.330442128 container start 0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:24 np0005550137 podman[95818]: 2025-12-08 09:48:24.73781717 +0000 UTC m=+0.334017722 container attach 0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:48:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]: {
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:    "1": [
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:        {
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "devices": [
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "/dev/loop3"
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            ],
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "lv_name": "ceph_lv0",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "lv_size": "21470642176",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ceb838ef-9d5d-54e4-bddb-2f01adce2ad4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=10863df8-16d4-4896-ae26-227efb76290e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "lv_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "name": "ceph_lv0",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "tags": {
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.block_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.cephx_lockbox_secret": "",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.cluster_fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.cluster_name": "ceph",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.crush_device_class": "",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.encrypted": "0",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.osd_fsid": "10863df8-16d4-4896-ae26-227efb76290e",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.osd_id": "1",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.type": "block",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.vdo": "0",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:                "ceph.with_tpm": "0"
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            },
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "type": "block",
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:            "vg_name": "ceph_vg0"
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:        }
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]:    ]
Dec  8 04:48:25 np0005550137 wonderful_fermat[95855]: }
Dec  8 04:48:25 np0005550137 systemd[1]: libpod-0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d.scope: Deactivated successfully.
Dec  8 04:48:25 np0005550137 podman[95818]: 2025-12-08 09:48:25.054909521 +0000 UTC m=+0.651110093 container died 0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  8 04:48:25 np0005550137 systemd[1]: var-lib-containers-storage-overlay-12de04e76f532a62f191c4c65772f1d37020c3a611853f1a7aef1239f75ce1c8-merged.mount: Deactivated successfully.
Dec  8 04:48:25 np0005550137 podman[95818]: 2025-12-08 09:48:25.098226285 +0000 UTC m=+0.694426837 container remove 0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_fermat, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:25 np0005550137 systemd[1]: libpod-conmon-0ae8318e3dcfb744246b464a78f8a8192e9752b68a133d1207dac607b01f126d.scope: Deactivated successfully.
Dec  8 04:48:25 np0005550137 ansible-async_wrapper.py[95992]: Invoked with j815837397950 30 /home/zuul/.ansible/tmp/ansible-tmp-1765187304.6762655-37430-150785462719639/AnsiballZ_command.py _
Dec  8 04:48:25 np0005550137 ansible-async_wrapper.py[96015]: Starting module and watcher
Dec  8 04:48:25 np0005550137 ansible-async_wrapper.py[96015]: Start watching 96020 (30)
Dec  8 04:48:25 np0005550137 ansible-async_wrapper.py[96020]: Start module (96020)
Dec  8 04:48:25 np0005550137 ansible-async_wrapper.py[95992]: Return async_wrapper task started.
Dec  8 04:48:25 np0005550137 python3[96026]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:25 np0005550137 podman[96060]: 2025-12-08 09:48:25.381082174 +0000 UTC m=+0.051197893 container create b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:25 np0005550137 systemd[1]: Started libpod-conmon-b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09.scope.
Dec  8 04:48:25 np0005550137 podman[96060]: 2025-12-08 09:48:25.35984112 +0000 UTC m=+0.029956819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:25 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:25 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7823781fd432cb96a179e6a9f96b58b0020a357517a7ccabce8cb317dcdcf6a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:25 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7823781fd432cb96a179e6a9f96b58b0020a357517a7ccabce8cb317dcdcf6a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:25 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec  8 04:48:25 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec  8 04:48:25 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 13 completed events
Dec  8 04:48:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:48:25 np0005550137 podman[96060]: 2025-12-08 09:48:25.7722829 +0000 UTC m=+0.442398609 container init b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:25 np0005550137 podman[96060]: 2025-12-08 09:48:25.779292963 +0000 UTC m=+0.449408652 container start b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:25 np0005550137 podman[96060]: 2025-12-08 09:48:25.782890288 +0000 UTC m=+0.453006117 container attach b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  8 04:48:25 np0005550137 podman[96119]: 2025-12-08 09:48:25.89453537 +0000 UTC m=+0.039238537 container create 323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:25 np0005550137 systemd[1]: Started libpod-conmon-323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2.scope.
Dec  8 04:48:25 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:25 np0005550137 podman[96119]: 2025-12-08 09:48:25.877449175 +0000 UTC m=+0.022152372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:25 np0005550137 podman[96119]: 2025-12-08 09:48:25.98055288 +0000 UTC m=+0.125256067 container init 323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_cohen, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec  8 04:48:25 np0005550137 podman[96119]: 2025-12-08 09:48:25.986139212 +0000 UTC m=+0.130842389 container start 323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:48:25 np0005550137 podman[96119]: 2025-12-08 09:48:25.989989824 +0000 UTC m=+0.134692981 container attach 323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec  8 04:48:25 np0005550137 hungry_cohen[96154]: 167 167
Dec  8 04:48:25 np0005550137 systemd[1]: libpod-323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2.scope: Deactivated successfully.
Dec  8 04:48:25 np0005550137 conmon[96154]: conmon 323dfe98b99d6cdd5a3c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2.scope/container/memory.events
Dec  8 04:48:25 np0005550137 podman[96119]: 2025-12-08 09:48:25.991920159 +0000 UTC m=+0.136623316 container died 323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Dec  8 04:48:26 np0005550137 systemd[1]: var-lib-containers-storage-overlay-c5ecbdf70ef6c84876f76cb3478d3f417ce35fd8a1b97f3c0d75b9bdd6dccf92-merged.mount: Deactivated successfully.
Dec  8 04:48:26 np0005550137 podman[96119]: 2025-12-08 09:48:26.027904391 +0000 UTC m=+0.172607538 container remove 323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  8 04:48:26 np0005550137 systemd[1]: libpod-conmon-323dfe98b99d6cdd5a3c487cddcb322225e1c840e5fd016118544aa9df3a1cf2.scope: Deactivated successfully.
Dec  8 04:48:26 np0005550137 podman[96178]: 2025-12-08 09:48:26.181199179 +0000 UTC m=+0.044481029 container create 349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Dec  8 04:48:26 np0005550137 systemd[1]: Started libpod-conmon-349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456.scope.
Dec  8 04:48:26 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14604 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  8 04:48:26 np0005550137 jovial_lumiere[96075]: 
Dec  8 04:48:26 np0005550137 jovial_lumiere[96075]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  8 04:48:26 np0005550137 podman[96060]: 2025-12-08 09:48:26.251450293 +0000 UTC m=+0.921565972 container died b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  8 04:48:26 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:26 np0005550137 systemd[1]: libpod-b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09.scope: Deactivated successfully.
Dec  8 04:48:26 np0005550137 podman[96178]: 2025-12-08 09:48:26.162275062 +0000 UTC m=+0.025556932 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:26 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:26 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a72040316b6e91a6c09843b196466bbc8a5537278dee895b43db6f9252bc28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:26 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a72040316b6e91a6c09843b196466bbc8a5537278dee895b43db6f9252bc28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:26 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a72040316b6e91a6c09843b196466bbc8a5537278dee895b43db6f9252bc28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:26 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11a72040316b6e91a6c09843b196466bbc8a5537278dee895b43db6f9252bc28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:26 np0005550137 podman[96178]: 2025-12-08 09:48:26.289071373 +0000 UTC m=+0.152353283 container init 349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_khayyam, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  8 04:48:26 np0005550137 systemd[1]: var-lib-containers-storage-overlay-7823781fd432cb96a179e6a9f96b58b0020a357517a7ccabce8cb317dcdcf6a8-merged.mount: Deactivated successfully.
Dec  8 04:48:26 np0005550137 podman[96178]: 2025-12-08 09:48:26.30036027 +0000 UTC m=+0.163642120 container start 349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  8 04:48:26 np0005550137 podman[96060]: 2025-12-08 09:48:26.314334524 +0000 UTC m=+0.984450203 container remove b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:26 np0005550137 podman[96178]: 2025-12-08 09:48:26.323790458 +0000 UTC m=+0.187072308 container attach 349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  8 04:48:26 np0005550137 systemd[1]: libpod-conmon-b595da1b2785e3b109b32b9d23fc3f08f457602474f412b244a7d6c93fc9ab09.scope: Deactivated successfully.
Dec  8 04:48:26 np0005550137 ansible-async_wrapper.py[96020]: Module complete (96020)
Dec  8 04:48:26 np0005550137 python3[96261]: ansible-ansible.legacy.async_status Invoked with jid=j815837397950.95992 mode=status _async_dir=/root/.ansible_async
Dec  8 04:48:26 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 193 B/s wr, 11 op/s
Dec  8 04:48:26 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec  8 04:48:26 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec  8 04:48:26 np0005550137 python3[96344]: ansible-ansible.legacy.async_status Invoked with jid=j815837397950.95992 mode=cleanup _async_dir=/root/.ansible_async
Dec  8 04:48:26 np0005550137 lvm[96380]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:48:26 np0005550137 lvm[96380]: VG ceph_vg0 finished
Dec  8 04:48:26 np0005550137 hopeful_khayyam[96194]: {}
Dec  8 04:48:26 np0005550137 systemd[1]: libpod-349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456.scope: Deactivated successfully.
Dec  8 04:48:27 np0005550137 systemd[1]: libpod-349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456.scope: Consumed 1.090s CPU time.
Dec  8 04:48:27 np0005550137 podman[96384]: 2025-12-08 09:48:27.04157191 +0000 UTC m=+0.025258733 container died 349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_khayyam, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Dec  8 04:48:27 np0005550137 systemd[1]: var-lib-containers-storage-overlay-11a72040316b6e91a6c09843b196466bbc8a5537278dee895b43db6f9252bc28-merged.mount: Deactivated successfully.
Dec  8 04:48:27 np0005550137 podman[96384]: 2025-12-08 09:48:27.077747827 +0000 UTC m=+0.061434650 container remove 349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  8 04:48:27 np0005550137 systemd[1]: libpod-conmon-349af7c5d73f44767a46f6e483d816cebebbb2a1099c415acc7db8c402d9f456.scope: Deactivated successfully.
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:27 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  8 04:48:27 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:27 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 75282a90-c848-47ae-a797-547d665919ae (Updating mds.cephfs deployment (+3 -> 3))
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.hhmzvb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.hhmzvb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.hhmzvb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:27 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.hhmzvb on compute-2
Dec  8 04:48:27 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.hhmzvb on compute-2
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.hhmzvb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  8 04:48:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.hhmzvb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  8 04:48:27 np0005550137 python3[96424]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:27 np0005550137 podman[96425]: 2025-12-08 09:48:27.456920925 +0000 UTC m=+0.049792812 container create d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3 (image=quay.io/ceph/ceph:v19, name=hungry_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:27 np0005550137 systemd[1]: Started libpod-conmon-d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3.scope.
Dec  8 04:48:27 np0005550137 podman[96425]: 2025-12-08 09:48:27.436434962 +0000 UTC m=+0.029306869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:27 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/407c8f9b38df07636084f0540e8d87f5e9f36a63773db07394f774c94defe065/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/407c8f9b38df07636084f0540e8d87f5e9f36a63773db07394f774c94defe065/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:27 np0005550137 podman[96425]: 2025-12-08 09:48:27.563940074 +0000 UTC m=+0.156812041 container init d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3 (image=quay.io/ceph/ceph:v19, name=hungry_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Dec  8 04:48:27 np0005550137 podman[96425]: 2025-12-08 09:48:27.574631313 +0000 UTC m=+0.167503230 container start d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3 (image=quay.io/ceph/ceph:v19, name=hungry_bohr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  8 04:48:27 np0005550137 podman[96425]: 2025-12-08 09:48:27.578503045 +0000 UTC m=+0.171374972 container attach d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3 (image=quay.io/ceph/ceph:v19, name=hungry_bohr, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  8 04:48:27 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec  8 04:48:27 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec  8 04:48:27 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14610 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  8 04:48:27 np0005550137 hungry_bohr[96440]: 
Dec  8 04:48:27 np0005550137 hungry_bohr[96440]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec  8 04:48:27 np0005550137 systemd[1]: libpod-d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3.scope: Deactivated successfully.
Dec  8 04:48:27 np0005550137 podman[96425]: 2025-12-08 09:48:27.943158473 +0000 UTC m=+0.536030400 container died d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3 (image=quay.io/ceph/ceph:v19, name=hungry_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:27 np0005550137 systemd[1]: var-lib-containers-storage-overlay-407c8f9b38df07636084f0540e8d87f5e9f36a63773db07394f774c94defe065-merged.mount: Deactivated successfully.
Dec  8 04:48:27 np0005550137 podman[96425]: 2025-12-08 09:48:27.987698892 +0000 UTC m=+0.580570789 container remove d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3 (image=quay.io/ceph/ceph:v19, name=hungry_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:28 np0005550137 systemd[1]: libpod-conmon-d5de29615d6c1520d8342ad48192bf60bdc7432c19bc316d38b4c70f43d18cf3.scope: Deactivated successfully.
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: Deploying daemon mds.cephfs.compute-2.hhmzvb on compute-2
Dec  8 04:48:28 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 170 B/s wr, 9 op/s
Dec  8 04:48:28 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec  8 04:48:28 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:48:28 np0005550137 python3[96501]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ywanut", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ywanut", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ywanut", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:28 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.ywanut on compute-0
Dec  8 04:48:28 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.ywanut on compute-0
Dec  8 04:48:28 np0005550137 podman[96502]: 2025-12-08 09:48:28.968793958 +0000 UTC m=+0.075156037 container create 4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3 (image=quay.io/ceph/ceph:v19, name=wonderful_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:29 np0005550137 systemd[1]: Started libpod-conmon-4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3.scope.
Dec  8 04:48:29 np0005550137 podman[96502]: 2025-12-08 09:48:28.927723028 +0000 UTC m=+0.034085097 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:29 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58e6ebab66576cbe66203b3a1201d365f84c239573da2237362bf6b94dc1c52e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:29 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58e6ebab66576cbe66203b3a1201d365f84c239573da2237362bf6b94dc1c52e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:29 np0005550137 podman[96502]: 2025-12-08 09:48:29.061206143 +0000 UTC m=+0.167568192 container init 4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3 (image=quay.io/ceph/ceph:v19, name=wonderful_fermi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  8 04:48:29 np0005550137 podman[96502]: 2025-12-08 09:48:29.067446543 +0000 UTC m=+0.173808582 container start 4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3 (image=quay.io/ceph/ceph:v19, name=wonderful_fermi, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  8 04:48:29 np0005550137 podman[96502]: 2025-12-08 09:48:29.070727068 +0000 UTC m=+0.177089127 container attach 4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3 (image=quay.io/ceph/ceph:v19, name=wonderful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e3 new map
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ywanut", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.ywanut", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-12-08T09:48:29:301156+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:11.623571+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.hhmzvb{-1:24232} state up:standby seq 1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] up:boot
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] as mds.0
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.hhmzvb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.hhmzvb"} v 0)
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.hhmzvb"}]: dispatch
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e3 all = 0
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e4 new map
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-12-08T09:48:29:331502+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:29.331497+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24232}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.hhmzvb{0:24232} state up:creating seq 1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:creating}
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.hhmzvb is now active in filesystem cephfs as rank 0
Dec  8 04:48:29 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14616 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  8 04:48:29 np0005550137 wonderful_fermi[96544]: 
Dec  8 04:48:29 np0005550137 wonderful_fermi[96544]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Dec  8 04:48:29 np0005550137 podman[96631]: 2025-12-08 09:48:29.489966577 +0000 UTC m=+0.052820031 container create 5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:48:29 np0005550137 systemd[1]: libpod-4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3.scope: Deactivated successfully.
Dec  8 04:48:29 np0005550137 podman[96502]: 2025-12-08 09:48:29.501416407 +0000 UTC m=+0.607778466 container died 4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3 (image=quay.io/ceph/ceph:v19, name=wonderful_fermi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:48:29 np0005550137 systemd[1]: Started libpod-conmon-5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c.scope.
Dec  8 04:48:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay-58e6ebab66576cbe66203b3a1201d365f84c239573da2237362bf6b94dc1c52e-merged.mount: Deactivated successfully.
Dec  8 04:48:29 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:29 np0005550137 podman[96502]: 2025-12-08 09:48:29.551470387 +0000 UTC m=+0.657832426 container remove 4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3 (image=quay.io/ceph/ceph:v19, name=wonderful_fermi, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:29 np0005550137 systemd[1]: libpod-conmon-4185e9e4c2312327726eed6fdf9b51dae0eb8559037f18c2bc1ce96e49106de3.scope: Deactivated successfully.
Dec  8 04:48:29 np0005550137 podman[96631]: 2025-12-08 09:48:29.469069351 +0000 UTC m=+0.031922815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:29 np0005550137 podman[96631]: 2025-12-08 09:48:29.566054119 +0000 UTC m=+0.128907593 container init 5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:29 np0005550137 podman[96631]: 2025-12-08 09:48:29.573854805 +0000 UTC m=+0.136708259 container start 5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_newton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:29 np0005550137 podman[96631]: 2025-12-08 09:48:29.577780758 +0000 UTC m=+0.140634252 container attach 5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:29 np0005550137 competent_newton[96661]: 167 167
Dec  8 04:48:29 np0005550137 systemd[1]: libpod-5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c.scope: Deactivated successfully.
Dec  8 04:48:29 np0005550137 podman[96631]: 2025-12-08 09:48:29.579255782 +0000 UTC m=+0.142109266 container died 5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay-8cfa3c5563596f878ff9a7e462fb79d8d5c7dcc571e1727b61c3402965bdd854-merged.mount: Deactivated successfully.
Dec  8 04:48:29 np0005550137 podman[96631]: 2025-12-08 09:48:29.629196167 +0000 UTC m=+0.192049621 container remove 5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_newton, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  8 04:48:29 np0005550137 systemd[1]: libpod-conmon-5f9c822e62e1ce5e8e1083fde8e3ce5938b1d002d64af8e27bc5618ff5f4974c.scope: Deactivated successfully.
Dec  8 04:48:29 np0005550137 systemd[1]: Reloading.
Dec  8 04:48:29 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec  8 04:48:29 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec  8 04:48:29 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:48:29 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:48:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:30 np0005550137 systemd[1]: Reloading.
Dec  8 04:48:30 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:48:30 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:48:30 np0005550137 ansible-async_wrapper.py[96015]: Done in kid B.
Dec  8 04:48:30 np0005550137 systemd[1]: Starting Ceph mds.cephfs.compute-0.ywanut for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: Deploying daemon mds.cephfs.compute-0.ywanut on compute-0
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: daemon mds.cephfs.compute-2.hhmzvb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: Cluster is now healthy
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: daemon mds.cephfs.compute-2.hhmzvb is now active in filesystem cephfs as rank 0
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e5 new map
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2025-12-08T09:48:30:344635+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:30.344631+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24232}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24232 members: 24232#012[mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 2 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] up:active
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active}
Dec  8 04:48:30 np0005550137 python3[96798]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:30 np0005550137 podman[96832]: 2025-12-08 09:48:30.503312705 +0000 UTC m=+0.054731056 container create d46237cee2e1bb74c5d504d54539d912eec73a625516b920c93a2c6df36f6e69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mds-cephfs-compute-0-ywanut, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:30 np0005550137 podman[96845]: 2025-12-08 09:48:30.551961723 +0000 UTC m=+0.056277971 container create 40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25 (image=quay.io/ceph/ceph:v19, name=pedantic_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec  8 04:48:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036da4e6a2e9b85b4c1197eefcad99555e6ad98cbd0e959229a3cc2aae48b96f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036da4e6a2e9b85b4c1197eefcad99555e6ad98cbd0e959229a3cc2aae48b96f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036da4e6a2e9b85b4c1197eefcad99555e6ad98cbd0e959229a3cc2aae48b96f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/036da4e6a2e9b85b4c1197eefcad99555e6ad98cbd0e959229a3cc2aae48b96f/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.ywanut supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:30 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 170 B/s wr, 2 op/s
Dec  8 04:48:30 np0005550137 podman[96832]: 2025-12-08 09:48:30.476819028 +0000 UTC m=+0.028237389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:30 np0005550137 podman[96832]: 2025-12-08 09:48:30.576889004 +0000 UTC m=+0.128307345 container init d46237cee2e1bb74c5d504d54539d912eec73a625516b920c93a2c6df36f6e69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mds-cephfs-compute-0-ywanut, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Dec  8 04:48:30 np0005550137 systemd[1]: Started libpod-conmon-40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25.scope.
Dec  8 04:48:30 np0005550137 podman[96832]: 2025-12-08 09:48:30.587666576 +0000 UTC m=+0.139084897 container start d46237cee2e1bb74c5d504d54539d912eec73a625516b920c93a2c6df36f6e69 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mds-cephfs-compute-0-ywanut, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:30 np0005550137 bash[96832]: d46237cee2e1bb74c5d504d54539d912eec73a625516b920c93a2c6df36f6e69
Dec  8 04:48:30 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:30 np0005550137 systemd[1]: Started Ceph mds.cephfs.compute-0.ywanut for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:48:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74706ba680ec7fecd4d44f4da3b7558d70203324a2208e9ed9fdfe63e275232b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:30 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74706ba680ec7fecd4d44f4da3b7558d70203324a2208e9ed9fdfe63e275232b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:30 np0005550137 podman[96845]: 2025-12-08 09:48:30.530395559 +0000 UTC m=+0.034711837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:30 np0005550137 podman[96845]: 2025-12-08 09:48:30.625131071 +0000 UTC m=+0.129447329 container init 40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25 (image=quay.io/ceph/ceph:v19, name=pedantic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Dec  8 04:48:30 np0005550137 ceph-mds[96868]: set uid:gid to 167:167 (ceph:ceph)
Dec  8 04:48:30 np0005550137 ceph-mds[96868]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Dec  8 04:48:30 np0005550137 ceph-mds[96868]: main not setting numa affinity
Dec  8 04:48:30 np0005550137 ceph-mds[96868]: pidfile_write: ignore empty --pid-file
Dec  8 04:48:30 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mds-cephfs-compute-0-ywanut[96860]: starting mds.cephfs.compute-0.ywanut at 
Dec  8 04:48:30 np0005550137 podman[96845]: 2025-12-08 09:48:30.633877485 +0000 UTC m=+0.138193713 container start 40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25 (image=quay.io/ceph/ceph:v19, name=pedantic_pasteur, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  8 04:48:30 np0005550137 ceph-mds[96868]: mds.cephfs.compute-0.ywanut Updating MDS map to version 5 from mon.0
Dec  8 04:48:30 np0005550137 podman[96845]: 2025-12-08 09:48:30.637380326 +0000 UTC m=+0.141696574 container attach 40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25 (image=quay.io/ceph/ceph:v19, name=pedantic_pasteur, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.tjxjxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.tjxjxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.tjxjxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:30 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.tjxjxt on compute-1
Dec  8 04:48:30 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.tjxjxt on compute-1
Dec  8 04:48:30 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Dec  8 04:48:30 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Dec  8 04:48:31 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='client.14628 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec  8 04:48:31 np0005550137 pedantic_pasteur[96866]: 
Dec  8 04:48:31 np0005550137 pedantic_pasteur[96866]: [{"container_id": "bc6254304966", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.14%", "created": "2025-12-08T09:45:38.099459Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-08T09:48:12.898360Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-12-08T09:45:37.971584Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@crash.compute-0", "version": "19.2.3"}, {"container_id": "0b1ceffabe23", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.49%", "created": "2025-12-08T09:46:16.734219Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-08T09:48:12.823286Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-12-08T09:46:16.623407Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@crash.compute-1", "version": "19.2.3"}, {"container_id": "285c43b94a76", "container_image_digests": 
["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.27%", "created": "2025-12-08T09:47:13.644841Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-08T09:48:12.772118Z", "memory_usage": 7799308, "ports": [], "service_name": "crash", "started": "2025-12-08T09:47:13.547823Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.ywanut", "daemon_name": "mds.cephfs.compute-0.ywanut", "daemon_type": "mds", "events": ["2025-12-08T09:48:30.680312Z daemon:mds.cephfs.compute-0.ywanut [INFO] \"Deployed mds.cephfs.compute-0.ywanut on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.hhmzvb", "daemon_name": "mds.cephfs.compute-2.hhmzvb", "daemon_type": "mds", "events": ["2025-12-08T09:48:28.884283Z daemon:mds.cephfs.compute-2.hhmzvb [INFO] \"Deployed mds.cephfs.compute-2.hhmzvb on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "45414a27262c", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "27.95%", "created": 
"2025-12-08T09:45:01.537292Z", "daemon_id": "compute-0.kitiwu", "daemon_name": "mgr.compute-0.kitiwu", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-08T09:48:12.898252Z", "memory_usage": 542008934, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-08T09:45:01.421415Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@mgr.compute-0.kitiwu", "version": "19.2.3"}, {"container_id": "9f365c7893a6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "41.11%", "created": "2025-12-08T09:47:10.984763Z", "daemon_id": "compute-1.mmkaif", "daemon_name": "mgr.compute-1.mmkaif", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-08T09:48:12.823701Z", "memory_usage": 503735910, "ports": [8765], "service_name": "mgr", "started": "2025-12-08T09:47:10.878510Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@mgr.compute-1.mmkaif", "version": "19.2.3"}, {"container_id": "c1057e782db2", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "36.87%", "created": "2025-12-08T09:47:04.237586Z", "daemon_id": "compute-2.zqytsv", "daemon_name": "mgr.compute-2.zqytsv", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-12-08T09:48:12.771967Z", "memory_usage": 503421337, "ports": 
[8765], "service_name": "mgr", "started": "2025-12-08T09:47:04.141779Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@mgr.compute-2.zqytsv", "version": "19.2.3"}, {"container_id": "e9eed32aa882", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.92%", "created": "2025-12-08T09:44:57.223155Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-08T09:48:12.898117Z", "memory_request": 2147483648, "memory_usage": 59464744, "ports": [], "service_name": "mon", "started": "2025-12-08T09:44:59.622943Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@mon.compute-0", "version": "19.2.3"}, {"container_id": "064bc633f509", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.43%", "created": "2025-12-08T09:46:59.483436Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-12-08T09:48:12.823536Z", "memory_request": 2147483648, "memory_usage": 51652853, "ports": [], "service_name": "mon", "started": "2025-12-08T09:46:59.364248Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@mon.compute-1", "version": "19.2.3"}, 
{"container_id": "3f8f4ec9b581", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "qu
Dec  8 04:48:31 np0005550137 systemd[1]: libpod-40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25.scope: Deactivated successfully.
Dec  8 04:48:31 np0005550137 conmon[96866]: conmon 40df8328f563c9b7dbad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25.scope/container/memory.events
Dec  8 04:48:31 np0005550137 podman[96845]: 2025-12-08 09:48:31.033031801 +0000 UTC m=+0.537348049 container died 40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25 (image=quay.io/ceph/ceph:v19, name=pedantic_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:31 np0005550137 systemd[1]: var-lib-containers-storage-overlay-74706ba680ec7fecd4d44f4da3b7558d70203324a2208e9ed9fdfe63e275232b-merged.mount: Deactivated successfully.
Dec  8 04:48:31 np0005550137 podman[96845]: 2025-12-08 09:48:31.077370995 +0000 UTC m=+0.581687263 container remove 40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25 (image=quay.io/ceph/ceph:v19, name=pedantic_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Dec  8 04:48:31 np0005550137 systemd[1]: libpod-conmon-40df8328f563c9b7dbadd7023de01ad28dcd0fbf7cc0c56a0661d254e73fcb25.scope: Deactivated successfully.
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.tjxjxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.tjxjxt", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec  8 04:48:31 np0005550137 rsyslogd[1006]: message too long (15927) with configured size 8096, begin of message is: [{"container_id": "bc6254304966", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e6 new map
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2025-12-08T09:48:31:354977+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:30.344631+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24232}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24232 members: 24232#012[mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 2 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]
Dec  8 04:48:31 np0005550137 ceph-mds[96868]: mds.cephfs.compute-0.ywanut Updating MDS map to version 6 from mon.0
Dec  8 04:48:31 np0005550137 ceph-mds[96868]: mds.cephfs.compute-0.ywanut Monitors have assigned me to become a standby
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] up:boot
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 1 up:standby
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ywanut"} v 0)
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ywanut"}]: dispatch
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e6 all = 0
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e7 new map
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2025-12-08T09:48:31:374596+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:30.344631+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24232}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24232 members: 24232#012[mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 2 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]
Dec  8 04:48:31 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 1 up:standby
Dec  8 04:48:32 np0005550137 python3[96947]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:32 np0005550137 podman[96948]: 2025-12-08 09:48:32.149374912 +0000 UTC m=+0.043991256 container create 46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39 (image=quay.io/ceph/ceph:v19, name=angry_northcutt, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Dec  8 04:48:32 np0005550137 systemd[1]: Started libpod-conmon-46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39.scope.
Dec  8 04:48:32 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca3126aac7ecc411184fa4c9e4475472944f107ca63b31e7abb74cb1edc97ac/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca3126aac7ecc411184fa4c9e4475472944f107ca63b31e7abb74cb1edc97ac/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:32 np0005550137 podman[96948]: 2025-12-08 09:48:32.13036311 +0000 UTC m=+0.024979424 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:32 np0005550137 podman[96948]: 2025-12-08 09:48:32.230065737 +0000 UTC m=+0.124682071 container init 46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39 (image=quay.io/ceph/ceph:v19, name=angry_northcutt, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:32 np0005550137 podman[96948]: 2025-12-08 09:48:32.236863674 +0000 UTC m=+0.131479988 container start 46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39 (image=quay.io/ceph/ceph:v19, name=angry_northcutt, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:48:32 np0005550137 podman[96948]: 2025-12-08 09:48:32.239797719 +0000 UTC m=+0.134414033 container attach 46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39 (image=quay.io/ceph/ceph:v19, name=angry_northcutt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: Deploying daemon mds.cephfs.compute-1.tjxjxt on compute-1
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 75282a90-c848-47ae-a797-547d665919ae (Updating mds.cephfs deployment (+3 -> 3))
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 75282a90-c848-47ae-a797-547d665919ae (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev b5459fec-57ce-4ca0-8b16-50ba00cd4784 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.2 KiB/s wr, 5 op/s
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.drrxym
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.drrxym
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2405458525' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Dec  8 04:48:32 np0005550137 angry_northcutt[96963]: 
Dec  8 04:48:32 np0005550137 angry_northcutt[96963]: {"fsid":"ceb838ef-9d5d-54e4-bddb-2f01adce2ad4","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":83,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":49,"num_osds":3,"num_up_osds":3,"osd_up_since":1765187255,"num_in_osds":3,"osd_in_since":1765187235,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":197,"data_bytes":465002,"bytes_used":107438080,"bytes_avail":64304488448,"bytes_total":64411926528,"read_bytes_sec":2473,"write_bytes_sec":170,"read_op_per_sec":2,"write_op_per_sec":0},"fsmap":{"epoch":7,"btime":"2025-12-08T09:48:31:374596+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.hhmzvb","status":"up:active","gid":24232}],"up:standby":1},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2025-12-08T09:47:53.183814+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.kitiwu":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.mmkaif":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.zqytsv":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14391":{"start_epoch":4,"start_stamp":"2025-12-08T09:47:52.204639+0000","gid":14391,"addr":"192.168.122.100:0/3979683973","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.slkrtm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"f2fa6c7a-b392-4a6f-84e7-a8a07770c620","zone_name":"default","zonegroup_id":"68492763-3f06-49eb-87b1-edc419fff75a","zonegroup_name":"default"},"task_status":{}},"24149":{"start_epoch":5,"start_stamp":"2025-12-08T09:47:52.220080+0000","gid":24149,"addr":"192.168.122.101:0/3268586272","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.rblbpq","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864320","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"f2fa6c7a-b392-4a6f-84e7-a8a07770c620","zone_name":"default","zonegroup_id":"68492763-3f06-49eb-87b1-edc419fff75a","zonegroup_name":"default"},"task_status":{}},"24160":{"start_epoch":5,"start_stamp":"2025-12-08T09:47:52.213233+0000","gid":24160,"addr":"192.168.122.102:0/2102705496","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.dimexm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025","kernel_version":"5.14.0-645.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864312","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"f2fa6c7a-b392-4a6f-84e7-a8a07770c620","zone_name":"default","zonegroup_id":"68492763-3f06-49eb-87b1-edc419fff75a","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"75282a90-c848-47ae-a797-547d665919ae":{"message":"Updating mds.cephfs deployment (+3 -> 3) (1s)\n      [=========...................] (remaining: 3s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Dec  8 04:48:32 np0005550137 systemd[1]: libpod-46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39.scope: Deactivated successfully.
Dec  8 04:48:32 np0005550137 conmon[96963]: conmon 46608557971baad5b89e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39.scope/container/memory.events
Dec  8 04:48:32 np0005550137 podman[96948]: 2025-12-08 09:48:32.675046521 +0000 UTC m=+0.569662835 container died 46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39 (image=quay.io/ceph/ceph:v19, name=angry_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:32 np0005550137 systemd[1]: var-lib-containers-storage-overlay-9ca3126aac7ecc411184fa4c9e4475472944f107ca63b31e7abb74cb1edc97ac-merged.mount: Deactivated successfully.
Dec  8 04:48:32 np0005550137 podman[96948]: 2025-12-08 09:48:32.721895817 +0000 UTC m=+0.616512141 container remove 46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39 (image=quay.io/ceph/ceph:v19, name=angry_northcutt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  8 04:48:32 np0005550137 systemd[1]: libpod-conmon-46608557971baad5b89e1b7112e7203de00d5eb58aa99e27206d130c9875bf39.scope: Deactivated successfully.
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.drrxym-rgw
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.drrxym-rgw
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.drrxym's ganesha conf is defaulting to empty
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.drrxym's ganesha conf is defaulting to empty
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.drrxym on compute-1
Dec  8 04:48:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.drrxym on compute-1
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 new map
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2025-12-08T09:48:33:354090+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:30.344631+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24232}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24232 members: 24232#012[mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 2 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.tjxjxt{-1:24218} state up:standby seq 1 addr [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] compat {c=[1],r=[1],i=[1fff]}]
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] up:boot
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 2 up:standby
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.tjxjxt"} v 0)
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.tjxjxt"}]: dispatch
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e8 all = 0
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: Creating key for client.nfs.cephfs.0.0.compute-1.drrxym
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: Creating key for client.nfs.cephfs.0.0.compute-1.drrxym-rgw
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.drrxym-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: Bind address in nfs.cephfs.0.0.compute-1.drrxym's ganesha conf is defaulting to empty
Dec  8 04:48:33 np0005550137 ceph-mon[74516]: Deploying daemon nfs.cephfs.0.0.compute-1.drrxym on compute-1
Dec  8 04:48:33 np0005550137 python3[97061]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:33 np0005550137 podman[97062]: 2025-12-08 09:48:33.787831398 +0000 UTC m=+0.072871091 container create 73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970 (image=quay.io/ceph/ceph:v19, name=youthful_gould, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:33 np0005550137 systemd[1]: Started libpod-conmon-73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970.scope.
Dec  8 04:48:33 np0005550137 podman[97062]: 2025-12-08 09:48:33.754802522 +0000 UTC m=+0.039842265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:33 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa658ac5fdfd261d9d7f2bf864f6d8b291144e90d9f2f3c7894e9dffe2952de3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:33 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa658ac5fdfd261d9d7f2bf864f6d8b291144e90d9f2f3c7894e9dffe2952de3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:33 np0005550137 podman[97062]: 2025-12-08 09:48:33.890137631 +0000 UTC m=+0.175177314 container init 73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970 (image=quay.io/ceph/ceph:v19, name=youthful_gould, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec  8 04:48:33 np0005550137 podman[97062]: 2025-12-08 09:48:33.898283376 +0000 UTC m=+0.183323039 container start 73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970 (image=quay.io/ceph/ceph:v19, name=youthful_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:33 np0005550137 podman[97062]: 2025-12-08 09:48:33.901675755 +0000 UTC m=+0.186715438 container attach 73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970 (image=quay.io/ceph/ceph:v19, name=youthful_gould, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29586248' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Dec  8 04:48:34 np0005550137 youthful_gould[97077]: 
Dec  8 04:48:34 np0005550137 systemd[1]: libpod-73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970.scope: Deactivated successfully.
Dec  8 04:48:34 np0005550137 youthful_gould[97077]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.kitiwu/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.mmkaif/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.zqytsv/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.slkrtm","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.rblbpq","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.dimexm","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec  8 04:48:34 np0005550137 podman[97062]: 2025-12-08 09:48:34.295301711 +0000 UTC m=+0.580341374 container died 73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970 (image=quay.io/ceph/ceph:v19, name=youthful_gould, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:34 np0005550137 systemd[1]: var-lib-containers-storage-overlay-fa658ac5fdfd261d9d7f2bf864f6d8b291144e90d9f2f3c7894e9dffe2952de3-merged.mount: Deactivated successfully.
Dec  8 04:48:34 np0005550137 podman[97062]: 2025-12-08 09:48:34.333784693 +0000 UTC m=+0.618824356 container remove 73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970 (image=quay.io/ceph/ceph:v19, name=youthful_gould, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:34 np0005550137 systemd[1]: libpod-conmon-73d70337528e1bb1da56b5c8785387535687e5d1b38b9da858d7210c6d132970.scope: Deactivated successfully.
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e9 new map
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2025-12-08T09:48:34:364938+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:33.376389+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24232}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24232 members: 24232#012[mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.tjxjxt{-1:24218} state up:standby seq 1 addr [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] compat {c=[1],r=[1],i=[1fff]}]
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] up:active
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 2 up:standby
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:34 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.wmyfrt
Dec  8 04:48:34 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.wmyfrt
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  8 04:48:34 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  8 04:48:34 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:34 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s wr, 3 op/s
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  8 04:48:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:35 np0005550137 python3[97155]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:35 np0005550137 podman[97156]: 2025-12-08 09:48:35.323495765 +0000 UTC m=+0.054219071 container create 0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d (image=quay.io/ceph/ceph:v19, name=elastic_pare, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Dec  8 04:48:35 np0005550137 systemd[1]: Started libpod-conmon-0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d.scope.
Dec  8 04:48:35 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b965d0b54a2de01204f8e8eb86f2a3e43a3e5f62be93a8d48d8afbee35de1784/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:35 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b965d0b54a2de01204f8e8eb86f2a3e43a3e5f62be93a8d48d8afbee35de1784/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:35 np0005550137 podman[97156]: 2025-12-08 09:48:35.390622079 +0000 UTC m=+0.121345375 container init 0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d (image=quay.io/ceph/ceph:v19, name=elastic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:35 np0005550137 podman[97156]: 2025-12-08 09:48:35.298985233 +0000 UTC m=+0.029708569 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:35 np0005550137 podman[97156]: 2025-12-08 09:48:35.397105053 +0000 UTC m=+0.127828349 container start 0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d (image=quay.io/ceph/ceph:v19, name=elastic_pare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:35 np0005550137 podman[97156]: 2025-12-08 09:48:35.401237156 +0000 UTC m=+0.131960452 container attach 0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d (image=quay.io/ceph/ceph:v19, name=elastic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: Creating key for client.nfs.cephfs.1.0.compute-2.wmyfrt
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e10 new map
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e10 print_map#012e10#012btime 2025-12-08T09:48:35:579331+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:33.376389+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24232}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24232 members: 24232#012[mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.tjxjxt{-1:24218} state up:standby seq 1 addr [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] compat {c=[1],r=[1],i=[1fff]}]
Dec  8 04:48:35 np0005550137 ceph-mds[96868]: mds.cephfs.compute-0.ywanut Updating MDS map to version 10 from mon.0
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] up:standby
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 2 up:standby
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716906672' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Dec  8 04:48:35 np0005550137 elastic_pare[97171]: mimic
Dec  8 04:48:35 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 14 completed events
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:48:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:35 np0005550137 systemd[1]: libpod-0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d.scope: Deactivated successfully.
Dec  8 04:48:35 np0005550137 podman[97156]: 2025-12-08 09:48:35.81095744 +0000 UTC m=+0.541680736 container died 0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d (image=quay.io/ceph/ceph:v19, name=elastic_pare, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:35 np0005550137 systemd[1]: var-lib-containers-storage-overlay-b965d0b54a2de01204f8e8eb86f2a3e43a3e5f62be93a8d48d8afbee35de1784-merged.mount: Deactivated successfully.
Dec  8 04:48:35 np0005550137 podman[97156]: 2025-12-08 09:48:35.850247043 +0000 UTC m=+0.580970339 container remove 0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d (image=quay.io/ceph/ceph:v19, name=elastic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  8 04:48:35 np0005550137 systemd[1]: libpod-conmon-0ce0c1710d234a4fe8682bed44db95b153249e5cc2d6bf2cd14c4ee0b98b483d.scope: Deactivated successfully.
Dec  8 04:48:36 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s wr, 3 op/s
Dec  8 04:48:36 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e11 new map
Dec  8 04:48:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e11 print_map#012e11#012btime 2025-12-08T09:48:36:799335+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-12-08T09:48:11.623571+0000#012modified#0112025-12-08T09:48:33.376389+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24232}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24232 members: 24232#012[mds.cephfs.compute-2.hhmzvb{0:24232} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1007969270,v1:192.168.122.102:6805/1007969270] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.ywanut{-1:14622} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/629465497,v1:192.168.122.100:6807/629465497] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.tjxjxt{-1:24218} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] compat {c=[1],r=[1],i=[1fff]}]
Dec  8 04:48:36 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1497473063,v1:192.168.122.101:6805/1497473063] up:standby
Dec  8 04:48:36 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.hhmzvb=up:active} 2 up:standby
Dec  8 04:48:36 np0005550137 python3[97233]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:48:36 np0005550137 podman[97234]: 2025-12-08 09:48:36.935367603 +0000 UTC m=+0.070231928 container create 2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30 (image=quay.io/ceph/ceph:v19, name=vigilant_sinoussi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  8 04:48:36 np0005550137 systemd[1]: Started libpod-conmon-2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30.scope.
Dec  8 04:48:36 np0005550137 podman[97234]: 2025-12-08 09:48:36.907754839 +0000 UTC m=+0.042619224 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:48:37 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1508fb4682d238d5b709ab93a0d24c9b16609ffe1ea16e323c96de82bcc3ebab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:37 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1508fb4682d238d5b709ab93a0d24c9b16609ffe1ea16e323c96de82bcc3ebab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:37 np0005550137 podman[97234]: 2025-12-08 09:48:37.029959477 +0000 UTC m=+0.164823822 container init 2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30 (image=quay.io/ceph/ceph:v19, name=vigilant_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:37 np0005550137 podman[97234]: 2025-12-08 09:48:37.037598626 +0000 UTC m=+0.172462921 container start 2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30 (image=quay.io/ceph/ceph:v19, name=vigilant_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:48:37 np0005550137 podman[97234]: 2025-12-08 09:48:37.041377188 +0000 UTC m=+0.176241493 container attach 2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30 (image=quay.io/ceph/ceph:v19, name=vigilant_sinoussi, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3042081611' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Dec  8 04:48:37 np0005550137 vigilant_sinoussi[97250]: 
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  8 04:48:37 np0005550137 vigilant_sinoussi[97250]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Dec  8 04:48:37 np0005550137 systemd[1]: libpod-2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30.scope: Deactivated successfully.
Dec  8 04:48:37 np0005550137 podman[97234]: 2025-12-08 09:48:37.642031704 +0000 UTC m=+0.776896019 container died 2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30 (image=quay.io/ceph/ceph:v19, name=vigilant_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:48:37 np0005550137 systemd[1]: var-lib-containers-storage-overlay-1508fb4682d238d5b709ab93a0d24c9b16609ffe1ea16e323c96de82bcc3ebab-merged.mount: Deactivated successfully.
Dec  8 04:48:37 np0005550137 podman[97234]: 2025-12-08 09:48:37.690036026 +0000 UTC m=+0.824900311 container remove 2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30 (image=quay.io/ceph/ceph:v19, name=vigilant_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec  8 04:48:37 np0005550137 systemd[1]: libpod-conmon-2379f7ea2f111fca3a4475c3d5bb52ad8b41e92d09276f437a68e5c758b79d30.scope: Deactivated successfully.
Dec  8 04:48:37 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:37 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:37 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw
Dec  8 04:48:37 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:48:37 np0005550137 ceph-mgr[74806]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.wmyfrt's ganesha conf is defaulting to empty
Dec  8 04:48:37 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.wmyfrt's ganesha conf is defaulting to empty
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:37 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.wmyfrt on compute-2
Dec  8 04:48:37 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.wmyfrt on compute-2
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:48:37 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:48:38 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 4 op/s
Dec  8 04:48:38 np0005550137 ceph-mon[74516]: Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:38 np0005550137 ceph-mon[74516]: Creating key for client.nfs.cephfs.1.0.compute-2.wmyfrt-rgw
Dec  8 04:48:38 np0005550137 ceph-mon[74516]: Bind address in nfs.cephfs.1.0.compute-2.wmyfrt's ganesha conf is defaulting to empty
Dec  8 04:48:38 np0005550137 ceph-mon[74516]: Deploying daemon nfs.cephfs.1.0.compute-2.wmyfrt on compute-2
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:39 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.cuvvno
Dec  8 04:48:39 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.cuvvno
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  8 04:48:39 np0005550137 ceph-mgr[74806]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  8 04:48:39 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Dec  8 04:48:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:40 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.8 KiB/s wr, 4 op/s
Dec  8 04:48:40 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:48:40 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:48:40 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:48:40 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:48:40 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:48:40 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:48:40 np0005550137 ceph-mon[74516]: Creating key for client.nfs.cephfs.2.0.compute-0.cuvvno
Dec  8 04:48:40 np0005550137 ceph-mon[74516]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.cuvvno-rgw
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.cuvvno-rgw
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.cuvvno's ganesha conf is defaulting to empty
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.cuvvno's ganesha conf is defaulting to empty
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:48:42 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.cuvvno on compute-0
Dec  8 04:48:42 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.cuvvno on compute-0
Dec  8 04:48:43 np0005550137 podman[97432]: 2025-12-08 09:48:43.550605605 +0000 UTC m=+0.053232780 container create e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:48:43 np0005550137 systemd[1]: Started libpod-conmon-e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038.scope.
Dec  8 04:48:43 np0005550137 podman[97432]: 2025-12-08 09:48:43.526819075 +0000 UTC m=+0.029446230 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:43 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:43 np0005550137 podman[97432]: 2025-12-08 09:48:43.650771676 +0000 UTC m=+0.153398871 container init e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:48:43 np0005550137 podman[97432]: 2025-12-08 09:48:43.657752995 +0000 UTC m=+0.160380160 container start e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  8 04:48:43 np0005550137 podman[97432]: 2025-12-08 09:48:43.662371782 +0000 UTC m=+0.164998967 container attach e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  8 04:48:43 np0005550137 thirsty_kalam[97448]: 167 167
Dec  8 04:48:43 np0005550137 systemd[1]: libpod-e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038.scope: Deactivated successfully.
Dec  8 04:48:43 np0005550137 podman[97432]: 2025-12-08 09:48:43.666028402 +0000 UTC m=+0.168655577 container died e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Dec  8 04:48:43 np0005550137 systemd[1]: var-lib-containers-storage-overlay-1cfbfbbbdeb809638a0485796585ca903eeddf42ece0a6039aa3f39668451373-merged.mount: Deactivated successfully.
Dec  8 04:48:43 np0005550137 podman[97432]: 2025-12-08 09:48:43.708323845 +0000 UTC m=+0.210950980 container remove e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_kalam, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  8 04:48:43 np0005550137 systemd[1]: libpod-conmon-e7a9b84debc12cd9ac05df06de1dedf2a73ecab88d706b4dd9113e0eba93b038.scope: Deactivated successfully.
Dec  8 04:48:43 np0005550137 systemd[1]: Reloading.
Dec  8 04:48:43 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:48:43 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:48:43 np0005550137 ceph-mon[74516]: Rados config object exists: conf-nfs.cephfs
Dec  8 04:48:43 np0005550137 ceph-mon[74516]: Creating key for client.nfs.cephfs.2.0.compute-0.cuvvno-rgw
Dec  8 04:48:43 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:48:43 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.cuvvno-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec  8 04:48:43 np0005550137 ceph-mon[74516]: Bind address in nfs.cephfs.2.0.compute-0.cuvvno's ganesha conf is defaulting to empty
Dec  8 04:48:43 np0005550137 ceph-mon[74516]: Deploying daemon nfs.cephfs.2.0.compute-0.cuvvno on compute-0
Dec  8 04:48:44 np0005550137 systemd[1]: Reloading.
Dec  8 04:48:44 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:48:44 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:48:44 np0005550137 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.cuvvno for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:48:44 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Dec  8 04:48:44 np0005550137 podman[97591]: 2025-12-08 09:48:44.627960973 +0000 UTC m=+0.047426637 container create 7034b3b818aede2fb8291924130aa2cf54d35eaefa864ea9d8e70488e88a0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:48:44 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37e944498f6df2867e24eb13592b428c6a0b02cab302eb4e0a1d9de1c4c7bfa/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:44 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37e944498f6df2867e24eb13592b428c6a0b02cab302eb4e0a1d9de1c4c7bfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:44 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37e944498f6df2867e24eb13592b428c6a0b02cab302eb4e0a1d9de1c4c7bfa/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:44 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f37e944498f6df2867e24eb13592b428c6a0b02cab302eb4e0a1d9de1c4c7bfa/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.cuvvno-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:44 np0005550137 podman[97591]: 2025-12-08 09:48:44.699128119 +0000 UTC m=+0.118593813 container init 7034b3b818aede2fb8291924130aa2cf54d35eaefa864ea9d8e70488e88a0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec  8 04:48:44 np0005550137 podman[97591]: 2025-12-08 09:48:44.608451891 +0000 UTC m=+0.027917575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:48:44 np0005550137 podman[97591]: 2025-12-08 09:48:44.705121188 +0000 UTC m=+0.124586862 container start 7034b3b818aede2fb8291924130aa2cf54d35eaefa864ea9d8e70488e88a0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:48:44 np0005550137 bash[97591]: 7034b3b818aede2fb8291924130aa2cf54d35eaefa864ea9d8e70488e88a0698
Dec  8 04:48:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  8 04:48:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  8 04:48:44 np0005550137 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.cuvvno for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:48:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  8 04:48:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  8 04:48:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  8 04:48:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:48:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  8 04:48:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:44 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev b5459fec-57ce-4ca0-8b16-50ba00cd4784 (Updating nfs.cephfs deployment (+3 -> 3))
Dec  8 04:48:44 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event b5459fec-57ce-4ca0-8b16-50ba00cd4784 (Updating nfs.cephfs deployment (+3 -> 3)) in 12 seconds
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:44 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 9280d4e0-3580-41b4-b3ff-2a3b8edb9a3d (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  8 04:48:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:45 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.opvoqw on compute-1
Dec  8 04:48:45 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.opvoqw on compute-1
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: Deploying daemon haproxy.nfs.cephfs.compute-1.opvoqw on compute-1
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  8 04:48:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  8 04:48:45 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 15 completed events
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:48:45 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:46 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.5 KiB/s wr, 4 op/s
Dec  8 04:48:46 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:48 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 5.7 KiB/s rd, 2.6 KiB/s wr, 9 op/s
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:49 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.dvsreo on compute-0
Dec  8 04:48:49 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.dvsreo on compute-0
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:49 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:50 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec  8 04:48:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:50 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:50 np0005550137 ceph-mon[74516]: Deploying daemon haproxy.nfs.cephfs.compute-0.dvsreo on compute-0
Dec  8 04:48:52 np0005550137 podman[97752]: 2025-12-08 09:48:52.39682807 +0000 UTC m=+2.302458668 container create dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78 (image=quay.io/ceph/haproxy:2.3, name=eloquent_jackson)
Dec  8 04:48:52 np0005550137 podman[97752]: 2025-12-08 09:48:52.378160553 +0000 UTC m=+2.283791191 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  8 04:48:52 np0005550137 systemd[1]: Started libpod-conmon-dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78.scope.
Dec  8 04:48:52 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:48:52 np0005550137 podman[97752]: 2025-12-08 09:48:52.489973822 +0000 UTC m=+2.395604460 container init dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78 (image=quay.io/ceph/haproxy:2.3, name=eloquent_jackson)
Dec  8 04:48:52 np0005550137 podman[97752]: 2025-12-08 09:48:52.502866046 +0000 UTC m=+2.408496674 container start dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78 (image=quay.io/ceph/haproxy:2.3, name=eloquent_jackson)
Dec  8 04:48:52 np0005550137 podman[97752]: 2025-12-08 09:48:52.506546096 +0000 UTC m=+2.412176694 container attach dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78 (image=quay.io/ceph/haproxy:2.3, name=eloquent_jackson)
Dec  8 04:48:52 np0005550137 eloquent_jackson[97871]: 0 0
Dec  8 04:48:52 np0005550137 systemd[1]: libpod-dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78.scope: Deactivated successfully.
Dec  8 04:48:52 np0005550137 podman[97752]: 2025-12-08 09:48:52.50934687 +0000 UTC m=+2.414977468 container died dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78 (image=quay.io/ceph/haproxy:2.3, name=eloquent_jackson)
Dec  8 04:48:52 np0005550137 systemd[1]: var-lib-containers-storage-overlay-65c32cb21850e14b0d260191e742aa893415d4f32cb5ec4ba682f40bd1143aff-merged.mount: Deactivated successfully.
Dec  8 04:48:52 np0005550137 podman[97752]: 2025-12-08 09:48:52.55991574 +0000 UTC m=+2.465546348 container remove dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78 (image=quay.io/ceph/haproxy:2.3, name=eloquent_jackson)
Dec  8 04:48:52 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Dec  8 04:48:52 np0005550137 systemd[1]: libpod-conmon-dbd93bcab7d03dd2931425240cee2a6e8150086483b0a8f9c5a36f2aa4e5ea78.scope: Deactivated successfully.
Dec  8 04:48:52 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:52 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714001e50 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:52 np0005550137 systemd[1]: Reloading.
Dec  8 04:48:52 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:48:52 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:48:53 np0005550137 systemd[1]: Reloading.
Dec  8 04:48:53 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:48:53 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:48:53 np0005550137 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.dvsreo for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:48:53 np0005550137 podman[98017]: 2025-12-08 09:48:53.727217134 +0000 UTC m=+0.050666134 container create 7f6df096ca74536932244eb4e1f4382864c206173aaf0a0b9089cd0768af80db (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo)
Dec  8 04:48:53 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab0be3de2f31b511a8f80f587d7d4580764e6c5dee3bfbcbf766c4b320823cd/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  8 04:48:53 np0005550137 podman[98017]: 2025-12-08 09:48:53.787571216 +0000 UTC m=+0.111020246 container init 7f6df096ca74536932244eb4e1f4382864c206173aaf0a0b9089cd0768af80db (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo)
Dec  8 04:48:53 np0005550137 podman[98017]: 2025-12-08 09:48:53.793958667 +0000 UTC m=+0.117407667 container start 7f6df096ca74536932244eb4e1f4382864c206173aaf0a0b9089cd0768af80db (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo)
Dec  8 04:48:53 np0005550137 podman[98017]: 2025-12-08 09:48:53.701908648 +0000 UTC m=+0.025357738 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  8 04:48:53 np0005550137 bash[98017]: 7f6df096ca74536932244eb4e1f4382864c206173aaf0a0b9089cd0768af80db
Dec  8 04:48:53 np0005550137 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.dvsreo for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:48:53 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo[98033]: [NOTICE] 341/094853 (2) : New worker #1 (4) forked
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.mtmwtv on compute-2
Dec  8 04:48:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.mtmwtv on compute-2
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:53 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:54 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  8 04:48:54 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:54 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f8000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:48:54 np0005550137 ceph-mon[74516]: Deploying daemon haproxy.nfs.cephfs.compute-2.mtmwtv on compute-2
Dec  8 04:48:55 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:55 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:56 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Dec  8 04:48:56 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:56 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:57 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:57 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:48:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:48:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  8 04:48:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Dec  8 04:48:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:57 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:48:57 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:48:57 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:48:57 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:48:57 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:48:57 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:48:57 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.qxgfft on compute-0
Dec  8 04:48:57 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.qxgfft on compute-0
Dec  8 04:48:58 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:58 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:58 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:58 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:48:58 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Dec  8 04:48:58 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:58 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:59 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:48:59 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:48:59 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:48:59 np0005550137 ceph-mon[74516]: Deploying daemon keepalived.nfs.cephfs.compute-0.qxgfft on compute-0
Dec  8 04:48:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:59 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:59 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:48:59 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c001bd0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:48:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:00 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  8 04:49:00 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:00 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:01 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:01 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:01 np0005550137 podman[98137]: 2025-12-08 09:49:01.230053847 +0000 UTC m=+2.758816936 container create c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048 (image=quay.io/ceph/keepalived:2.2.4, name=stupefied_borg, version=2.2.4, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, name=keepalived, release=1793, vendor=Red Hat, Inc., vcs-type=git, description=keepalived for Ceph, architecture=x86_64, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20)
Dec  8 04:49:01 np0005550137 systemd[1]: Started libpod-conmon-c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048.scope.
Dec  8 04:49:01 np0005550137 podman[98137]: 2025-12-08 09:49:01.211019499 +0000 UTC m=+2.739782618 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  8 04:49:01 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:01 np0005550137 podman[98137]: 2025-12-08 09:49:01.341721671 +0000 UTC m=+2.870484810 container init c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048 (image=quay.io/ceph/keepalived:2.2.4, name=stupefied_borg, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  8 04:49:01 np0005550137 podman[98137]: 2025-12-08 09:49:01.356177243 +0000 UTC m=+2.884940322 container start c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048 (image=quay.io/ceph/keepalived:2.2.4, name=stupefied_borg, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, release=1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived)
Dec  8 04:49:01 np0005550137 podman[98137]: 2025-12-08 09:49:01.359590105 +0000 UTC m=+2.888353224 container attach c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048 (image=quay.io/ceph/keepalived:2.2.4, name=stupefied_borg, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, distribution-scope=public, io.openshift.expose-services=, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, name=keepalived, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793)
Dec  8 04:49:01 np0005550137 stupefied_borg[98234]: 0 0
Dec  8 04:49:01 np0005550137 systemd[1]: libpod-c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048.scope: Deactivated successfully.
Dec  8 04:49:01 np0005550137 podman[98137]: 2025-12-08 09:49:01.364567164 +0000 UTC m=+2.893330243 container died c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048 (image=quay.io/ceph/keepalived:2.2.4, name=stupefied_borg, description=keepalived for Ceph, release=1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, name=keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, distribution-scope=public)
Dec  8 04:49:01 np0005550137 systemd[1]: var-lib-containers-storage-overlay-5e252c553656ba718c95c19dce16368cca76337f26a540e1b36cb25e7a77ded6-merged.mount: Deactivated successfully.
Dec  8 04:49:01 np0005550137 podman[98137]: 2025-12-08 09:49:01.421948666 +0000 UTC m=+2.950711755 container remove c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048 (image=quay.io/ceph/keepalived:2.2.4, name=stupefied_borg, description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived)
Dec  8 04:49:01 np0005550137 systemd[1]: libpod-conmon-c571f70cb70721627c8980fc34db20472dd4a44f8390541a261021f47605a048.scope: Deactivated successfully.
Dec  8 04:49:01 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:01 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:01 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:01 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:01 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:01 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:02 np0005550137 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.qxgfft for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:49:02 np0005550137 podman[98381]: 2025-12-08 09:49:02.405187065 +0000 UTC m=+0.051632042 container create 860f9b1fceef64b25d38d4f198ba3ddb3d3c4871377cfc5a28c6fffa3c89de5c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, com.redhat.component=keepalived-container, version=2.2.4, io.buildah.version=1.28.2)
Dec  8 04:49:02 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e1a80ca793facc870aa697da02ea8d6db4736bc1b4acf6eacdada98f44860e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:02 np0005550137 podman[98381]: 2025-12-08 09:49:02.377346304 +0000 UTC m=+0.023791321 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  8 04:49:02 np0005550137 podman[98381]: 2025-12-08 09:49:02.471469225 +0000 UTC m=+0.117914292 container init 860f9b1fceef64b25d38d4f198ba3ddb3d3c4871377cfc5a28c6fffa3c89de5c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, release=1793, distribution-scope=public, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  8 04:49:02 np0005550137 podman[98381]: 2025-12-08 09:49:02.481476044 +0000 UTC m=+0.127921051 container start 860f9b1fceef64b25d38d4f198ba3ddb3d3c4871377cfc5a28c6fffa3c89de5c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived)
Dec  8 04:49:02 np0005550137 bash[98381]: 860f9b1fceef64b25d38d4f198ba3ddb3d3c4871377cfc5a28c6fffa3c89de5c
Dec  8 04:49:02 np0005550137 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.qxgfft for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: Starting VRRP child process, pid=4
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: Startup complete
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: (VI_0) Entering BACKUP STATE (init)
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:02 2025: VRRP_Script(check_backend) succeeded
Dec  8 04:49:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  8 04:49:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  8 04:49:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.khfxdl on compute-1
Dec  8 04:49:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.khfxdl on compute-1
Dec  8 04:49:02 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:02 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c0089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:03 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:03 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:03 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f80016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:03 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:03 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:03 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:03 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:49:03 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:03 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:03 np0005550137 ceph-mon[74516]: Deploying daemon keepalived.nfs.cephfs.compute-1.khfxdl on compute-1
Dec  8 04:49:04 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  8 04:49:04 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:04 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:05 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c0089d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:05 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:06 2025: (VI_0) Entering MASTER STATE
Dec  8 04:49:06 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  8 04:49:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:06 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:07 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:07 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c0096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:49:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:49:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  8 04:49:07 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:07 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:07 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:07 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:07 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:07 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:49:07 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:49:07 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.bcrsho on compute-2
Dec  8 04:49:07 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.bcrsho on compute-2
Dec  8 04:49:08 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Dec  8 04:49:08 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:08 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:08 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:08 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:08 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:08 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Dec  8 04:49:08 np0005550137 ceph-mon[74516]: Deploying daemon keepalived.nfs.cephfs.compute-2.bcrsho on compute-2
Dec  8 04:49:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:08 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:09 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:09 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:09 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:09 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 102 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] Optimize plan auto_2025-12-08_09:49:10
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] do_upmap
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'vms', 'backups', 'default.rgw.control', 'volumes', '.nfs', 'cephfs.cephfs.data', 'images', '.rgw.root', '.mgr']
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [balancer INFO root] prepared 0/10 upmap changes
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] _maybe_adjust
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 78bc646a-5541-4f34-9789-75573139872f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:49:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  8 04:49:10 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  8 04:49:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:10 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c0096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:11 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:11 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f8002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:11 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Dec  8 04:49:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec  8 04:49:11 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:11 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec  8 04:49:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec  8 04:49:11 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 12e62abb-7fd4-4a49-ac6e-43c760b062dd (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  8 04:49:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:49:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:12 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 9280d4e0-3580-41b4-b3ff-2a3b8edb9a3d (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Dec  8 04:49:12 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 9280d4e0-3580-41b4-b3ff-2a3b8edb9a3d (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 27 seconds
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:12 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 14228d34-ea82-48be-af71-af222b3ee161 (Updating alertmanager deployment (+1 -> 1))
Dec  8 04:49:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Dec  8 04:49:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Dec  8 04:49:12 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v40: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec  8 04:49:12 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:12 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec  8 04:49:12 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 768d8243-48fc-4648-898d-be18aa927c13 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:12 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 52 pg[8.0( v 35'12 (0'0,35'12] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=14.443001747s) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 35'11 mlcod 35'11 active pruub 176.252227783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 52 pg[9.0( v 49'1026 (0'0,49'1026] local-lis/les=36/37 n=178 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52 pruub=8.459206581s) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 49'1025 mlcod 49'1025 active pruub 170.268646240s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 52 pg[8.0( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=52 pruub=14.443001747s) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 35'11 mlcod 0'0 unknown pruub 176.252227783s@ mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x55aa76f4e900) operator()   moving buffer(0x55aa778b0b68 space 0x55aa776ee900 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x55aa76f4e900) operator()   moving buffer(0x55aa778ee2a8 space 0x55aa7760a9d0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x55aa76f4e900) operator()   moving buffer(0x55aa778b08e8 space 0x55aa7779de20 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(8.0_head 0x55aa76f4e900) operator()   moving buffer(0x55aa778eee88 space 0x55aa779140e0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 52 pg[9.0( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52 pruub=8.459206581s) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 49'1025 mlcod 0'0 unknown pruub 170.268646240s@ mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778cf2e8 space 0x55aa777d04f0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778b0208 space 0x55aa777d0760 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778cf1a8 space 0x55aa777d0de0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa774d36a8 space 0x55aa777d1d50 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa7789b748 space 0x55aa777d1600 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778e82a8 space 0x55aa775f8d10 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778c72e8 space 0x55aa777cbef0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa7789b608 space 0x55aa777672c0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778b19c8 space 0x55aa779151f0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778cf608 space 0x55aa777d1530 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778b0348 space 0x55aa777d0830 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778cf248 space 0x55aa777d0c40 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa7789b1a8 space 0x55aa777d11f0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778b1ec8 space 0x55aa77915050 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778cee88 space 0x55aa75540690 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778b1248 space 0x55aa779152c0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa77653f68 space 0x55aa77915120 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778aafc8 space 0x55aa777d1e20 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa7789a7a8 space 0x55aa777d1050 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778ceac8 space 0x55aa777d05c0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778cf748 space 0x55aa777d0690 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778e88e8 space 0x55aa777d0eb0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778e8c08 space 0x55aa777d0f80 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa77188708 space 0x55aa777cb460 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778efb08 space 0x55aa776e4760 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa7789b7e8 space 0x55aa777d12c0 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa7789aca8 space 0x55aa777d1120 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778e8528 space 0x55aa777d1390 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa7789ae88 space 0x55aa77646010 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778cf388 space 0x55aa777d0d10 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x55aa779c7200) operator()   moving buffer(0x55aa778b0ac8 space 0x55aa777d0900 0x0~1000 clean)
Dec  8 04:49:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:13 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c0096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:13 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c0096e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec  8 04:49:13 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 7e47e026-2993-4b30-9a80-64ae44123608 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.14( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.15( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.14( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.15( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.17( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.16( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.16( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.11( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.10( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.17( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.10( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.3( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.11( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.2( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.3( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.2( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.f( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.9( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.8( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.e( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.8( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.9( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.b( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.f( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.e( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.a( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.c( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.d( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.c( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.d( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.a( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.b( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1( v 35'12 (0'0,35'12] local-lis/les=34/35 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.7( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.6( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.7( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.6( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.4( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.5( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.5( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.4( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1a( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1b( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1b( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.18( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.19( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.18( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.19( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1e( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1f( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1a( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1f( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1e( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1c( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1d( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1d( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1c( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.12( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.13( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.12( v 35'12 lc 0'0 (0'0,35'12] local-lis/les=34/35 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.13( v 49'1026 lc 0'0 (0'0,49'1026] local-lis/les=36/37 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.15( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.14( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.16( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.17( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.14( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.10( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.17( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.3( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.11( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.2( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.3( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.2( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.f( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.9( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.8( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.8( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.9( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.e( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.c( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.c( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.d( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.b( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.0( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 49'1025 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.a( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.0( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 35'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.7( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.4( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.6( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.5( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.7( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.4( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.5( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1b( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.18( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1f( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.18( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1c( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1d( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1e( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.1a( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.13( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.19( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.1c( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[9.13( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=36/36 les/c/f=37/37/0 sis=52) [1] r=0 lpr=52 pi=[36,52)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 53 pg[8.12( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=34/34 les/c/f=35/35/0 sis=52) [1] r=0 lpr=52 pi=[34,52)/1 crt=35'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: Deploying daemon alertmanager.compute-0 on compute-0
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:13 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v43: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.688624021 +0000 UTC m=+1.641950008 volume create 14f4719f996b4e2ea13e1da5b5b6689314a8f73824c0e28f53a169f047541092
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.697242379 +0000 UTC m=+1.650568366 container create bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b (image=quay.io/prometheus/alertmanager:v0.25.0, name=stupefied_gates, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:14 np0005550137 systemd[1]: Started libpod-conmon-bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b.scope.
Dec  8 04:49:14 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07800eb58b6434b67aa63794f07134a56710a6fd38caa296288e1f43c111131d/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.66985882 +0000 UTC m=+1.623184817 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.782860815 +0000 UTC m=+1.736186842 container init bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b (image=quay.io/prometheus/alertmanager:v0.25.0, name=stupefied_gates, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.795312857 +0000 UTC m=+1.748638844 container start bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b (image=quay.io/prometheus/alertmanager:v0.25.0, name=stupefied_gates, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.800001577 +0000 UTC m=+1.753327584 container attach bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b (image=quay.io/prometheus/alertmanager:v0.25.0, name=stupefied_gates, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:14 np0005550137 stupefied_gates[98640]: 65534 65534
Dec  8 04:49:14 np0005550137 systemd[1]: libpod-bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b.scope: Deactivated successfully.
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.801414508 +0000 UTC m=+1.754740495 container died bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b (image=quay.io/prometheus/alertmanager:v0.25.0, name=stupefied_gates, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec  8 04:49:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:14 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:14 np0005550137 systemd[1]: var-lib-containers-storage-overlay-07800eb58b6434b67aa63794f07134a56710a6fd38caa296288e1f43c111131d-merged.mount: Deactivated successfully.
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 1d688274-830a-4741-8550-d586effbba99 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 78bc646a-5541-4f34-9789-75573139872f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 78bc646a-5541-4f34-9789-75573139872f (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 12e62abb-7fd4-4a49-ac6e-43c760b062dd (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 12e62abb-7fd4-4a49-ac6e-43c760b062dd (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 768d8243-48fc-4648-898d-be18aa927c13 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 768d8243-48fc-4648-898d-be18aa927c13 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 7e47e026-2993-4b30-9a80-64ae44123608 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 7e47e026-2993-4b30-9a80-64ae44123608 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 1d688274-830a-4741-8550-d586effbba99 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Dec  8 04:49:14 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 1d688274-830a-4741-8550-d586effbba99 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Dec  8 04:49:14 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.15 deep-scrub starts
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.841183766 +0000 UTC m=+1.794509753 container remove bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b (image=quay.io/prometheus/alertmanager:v0.25.0, name=stupefied_gates, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:14 np0005550137 podman[98505]: 2025-12-08 09:49:14.845086283 +0000 UTC m=+1.798412280 volume remove 14f4719f996b4e2ea13e1da5b5b6689314a8f73824c0e28f53a169f047541092
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:14 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.15 deep-scrub ok
Dec  8 04:49:14 np0005550137 systemd[1]: libpod-conmon-bbc1f0e0c1a8e7ec906c9bb58c1fc1291ed6c473a8fd9f7b663584a8c90bd10b.scope: Deactivated successfully.
Dec  8 04:49:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:14 np0005550137 podman[98657]: 2025-12-08 09:49:14.907502996 +0000 UTC m=+0.035423229 volume create dc3701444bdf7ad3adc8f5716ebe88aff613c48a90f1e1f7705b27fd0a126f01
Dec  8 04:49:14 np0005550137 podman[98657]: 2025-12-08 09:49:14.919269347 +0000 UTC m=+0.047189620 container create aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_solomon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:14 np0005550137 systemd[1]: Started libpod-conmon-aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14.scope.
Dec  8 04:49:14 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:14 np0005550137 podman[98657]: 2025-12-08 09:49:14.895042314 +0000 UTC m=+0.022962567 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  8 04:49:14 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b5417bbcb9463c5d53d6be60e83e103899217d18e1fe6837f0c75ea0584403/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:15 np0005550137 podman[98657]: 2025-12-08 09:49:15.009220453 +0000 UTC m=+0.137140696 container init aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_solomon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:15 np0005550137 podman[98657]: 2025-12-08 09:49:15.015963465 +0000 UTC m=+0.143883698 container start aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_solomon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:15 np0005550137 blissful_solomon[98673]: 65534 65534
Dec  8 04:49:15 np0005550137 podman[98657]: 2025-12-08 09:49:15.01982614 +0000 UTC m=+0.147746373 container attach aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_solomon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:15 np0005550137 systemd[1]: libpod-aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14.scope: Deactivated successfully.
Dec  8 04:49:15 np0005550137 podman[98657]: 2025-12-08 09:49:15.020701586 +0000 UTC m=+0.148621819 container died aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_solomon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:15 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e6b5417bbcb9463c5d53d6be60e83e103899217d18e1fe6837f0c75ea0584403-merged.mount: Deactivated successfully.
Dec  8 04:49:15 np0005550137 podman[98657]: 2025-12-08 09:49:15.059098102 +0000 UTC m=+0.187018335 container remove aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14 (image=quay.io/prometheus/alertmanager:v0.25.0, name=blissful_solomon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:15 np0005550137 podman[98657]: 2025-12-08 09:49:15.062515185 +0000 UTC m=+0.190435448 volume remove dc3701444bdf7ad3adc8f5716ebe88aff613c48a90f1e1f7705b27fd0a126f01
Dec  8 04:49:15 np0005550137 systemd[1]: libpod-conmon-aaf95adb10c43fda792c885cad04aceb9166927db04663d32cc098f0be99fb14.scope: Deactivated successfully.
Dec  8 04:49:15 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:15 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:15 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:15 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:15 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:15 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:15 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:15 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 54 pg[11.0( v 49'2 (0'0,49'2] local-lis/les=40/41 n=2 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=10.141372681s) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 49'1 mlcod 49'1 active pruub 174.319366455s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 54 pg[11.0( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54 pruub=10.141372681s) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 49'1 mlcod 0'0 unknown pruub 174.319366455s@ mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:15 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:15 np0005550137 systemd[1]: Starting Ceph alertmanager.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:49:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec  8 04:49:15 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 21 completed events
Dec  8 04:49:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:49:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec  8 04:49:15 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.17( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.16( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.15( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.13( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.12( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1( v 49'2 (0'0,49'2] local-lis/les=40/41 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.14( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.c( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.b( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.a( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.9( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.d( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.f( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.e( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.8( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.3( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.4( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.5( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.2( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.6( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.7( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.18( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.19( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1a( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1b( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1c( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1d( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1e( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1f( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.10( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.11( v 49'2 lc 0'0 (0'0,49'2] local-lis/les=40/41 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.16( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.17( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.13( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1( v 49'2 (0'0,49'2] local-lis/les=54/55 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.12( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.14( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.15( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.b( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.a( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.d( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.9( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.f( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.3( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.5( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.8( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.e( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.6( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.4( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.7( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.0( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 49'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.18( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.2( v 49'2 (0'0,49'2] local-lis/les=54/55 n=1 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1a( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1c( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1d( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1b( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1f( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.19( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.10( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.c( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.11( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-mgr[74806]: [progress WARNING root] Starting Global Recovery Event,124 pgs not in active + clean state
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 55 pg[11.1e( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=40/40 les/c/f=41/41/0 sis=54) [1] r=0 lpr=54 pi=[40,54)/1 crt=49'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:15 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec  8 04:49:15 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec  8 04:49:16 np0005550137 podman[98819]: 2025-12-08 09:49:16.000852762 +0000 UTC m=+0.061003792 volume create ac36cabf1685985f12b4719a79e790ff0b5cfe2ee9af3f5025152a374f3d5695
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:16 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Dec  8 04:49:16 np0005550137 podman[98819]: 2025-12-08 09:49:16.017796978 +0000 UTC m=+0.077948008 container create 7099edc240bca550d8ffa93b6e07ba6fcf270dd0e9d3a56c64a700eedb3fc8a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:16 np0005550137 podman[98819]: 2025-12-08 09:49:15.982874855 +0000 UTC m=+0.043025895 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  8 04:49:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7624865efeb6a95f84bc47dd5a2668c1b36291a43db5be7b7a6a5e59098268/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:16 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e7624865efeb6a95f84bc47dd5a2668c1b36291a43db5be7b7a6a5e59098268/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:16 np0005550137 podman[98819]: 2025-12-08 09:49:16.119527166 +0000 UTC m=+0.179678246 container init 7099edc240bca550d8ffa93b6e07ba6fcf270dd0e9d3a56c64a700eedb3fc8a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:16 np0005550137 podman[98819]: 2025-12-08 09:49:16.128903045 +0000 UTC m=+0.189054085 container start 7099edc240bca550d8ffa93b6e07ba6fcf270dd0e9d3a56c64a700eedb3fc8a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:16 np0005550137 bash[98819]: 7099edc240bca550d8ffa93b6e07ba6fcf270dd0e9d3a56c64a700eedb3fc8a8
Dec  8 04:49:16 np0005550137 systemd[1]: Started Ceph alertmanager.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:16.173Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:16.173Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:16.188Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:16.190Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:16.255Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:16.256Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 14228d34-ea82-48be-af71-af222b3ee161 (Updating alertmanager deployment (+1 -> 1))
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 14228d34-ea82-48be-af71-af222b3ee161 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:16.265Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:16.265Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 74399342-fda7-46db-b41b-ad6c35f98e21 (Updating grafana deployment (+1 -> 1))
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Dec  8 04:49:16 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v46: 322 pgs: 124 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Dec  8 04:49:16 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:16 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:16 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:16 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.16 deep-scrub starts
Dec  8 04:49:16 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.16 deep-scrub ok
Dec  8 04:49:17 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:17 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:17 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:17 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: Regenerating cephadm self-signed grafana TLS certificates
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: Deploying daemon grafana.compute-0 on compute-0
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec  8 04:49:17 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec  8 04:49:17 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec  8 04:49:17 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec  8 04:49:18 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:18.191Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000176352s
Dec  8 04:49:18 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Dec  8 04:49:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec  8 04:49:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec  8 04:49:18 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec  8 04:49:18 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:18 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:18 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c00a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:18 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec  8 04:49:18 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec  8 04:49:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:19 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:19 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:19 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec  8 04:49:19 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec  8 04:49:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:20 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 31 unknown, 322 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:20 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:20 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:20 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 22 completed events
Dec  8 04:49:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:49:20 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec  8 04:49:20 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:20 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec  8 04:49:21 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:21 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c00a3f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:21 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:21 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:21 np0005550137 python3[99155]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:49:21 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:21 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec  8 04:49:21 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec  8 04:49:22 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec  8 04:49:22 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec  8 04:49:22 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec  8 04:49:22 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:22 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f8003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:23 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:23 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:23 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:23 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e8000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.17( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531887054s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.540191650s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.14( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.511912346s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520324707s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.14( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.511754990s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520324707s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.17( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531702995s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.540191650s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.16( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531446457s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.540176392s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.16( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531384468s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.540176392s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.15( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.510281563s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.518981934s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.15( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509788513s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.518981934s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.14( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.536235809s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.545486450s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.17( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.511142731s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520568848s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.14( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.536198616s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.545486450s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.17( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.511108398s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520568848s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.13( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.535806656s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.545379639s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.13( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.535752296s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.545379639s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.10( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.510667801s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520462036s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.12( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.535584450s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.545455933s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.10( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.510623932s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520462036s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.12( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.535559654s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.545455933s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.11( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.510600090s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520599365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.11( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.510551453s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520599365s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1( v 49'2 (0'0,49'2] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.534593582s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.545394897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1( v 49'2 (0'0,49'2] local-lis/les=54/55 n=1 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.534548759s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.545394897s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.f( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509798050s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520706177s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.3( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509664536s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520599365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.f( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509781837s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520706177s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.3( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509621620s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520599365s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.2( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509422302s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520584106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.2( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509305954s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520584106s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.a( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.533786774s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.545639038s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.8( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509268761s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520706177s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.8( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.508221626s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520706177s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.a( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.533749580s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.545639038s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.16( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507525444s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520339966s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.a( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.511301994s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524154663s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.a( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.511281967s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524154663s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.9( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507658958s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520736694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.16( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507489204s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520339966s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.d( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507581711s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520767212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.d( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507462502s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520767212s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.f( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.532546997s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.545928955s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.f( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.532496452s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.545928955s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.c( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507301331s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520767212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.c( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507249832s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520767212s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.9( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507496834s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520736694s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.b( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.506975174s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.520812988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.b( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.506960869s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520812988s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.8( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.532197952s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546066284s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.8( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.532162666s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546066284s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.3( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531964302s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546066284s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.3( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531949043s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546066284s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.4( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531896591s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546157837s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.e( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531833649s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546112061s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.e( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531794548s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546112061s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.5( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531703949s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546096802s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.5( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531682968s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546096802s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.7( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531363487s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546173096s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.7( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531341553s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546173096s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.6( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509638786s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524368286s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.6( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509447098s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524368286s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.1b( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509569168s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524627686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.1b( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509552002s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524627686s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.5( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509664536s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524627686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.5( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509436607s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524627686s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1a( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.530915260s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546279907s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1a( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.530891418s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546279907s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.4( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509073257s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524505615s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.4( v 35'12 (0'0,35'12] local-lis/les=52/53 n=1 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509029388s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524505615s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.19( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509391785s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524993896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.19( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.509366035s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524993896s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1c( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.530416489s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546310425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.18( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.508839607s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524749756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1c( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.530371666s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546310425s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.18( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.508797646s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524749756s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1b( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.530190468s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546325684s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.1f( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.508559227s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524749756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.1f( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.508514404s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524749756s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1d( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.529963493s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546310425s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1e( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.532533646s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.548934937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1e( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.532521248s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.548934937s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1d( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.529929161s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546310425s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.1b( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.530129433s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546325684s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.4( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.531024933s) [0] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546157837s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.19( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.530993462s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 active pruub 180.546340942s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[11.19( v 49'2 (0'0,49'2] local-lis/les=54/55 n=0 ec=54/40 lis/c=54/54 les/c/f=55/55/0 sis=58 pruub=8.529469490s) [2] r=-1 lpr=58 pi=[54,58)/1 crt=49'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 180.546340942s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.1c( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.508003235s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.524887085s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.1c( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507910728s) [2] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524887085s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.12( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507733345s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 active pruub 186.525054932s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[8.12( v 35'12 (0'0,35'12] local-lis/les=52/53 n=0 ec=52/34 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=14.507640839s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=35'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.525054932s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Dec  8 04:49:23 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.10( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.14( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.2( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.6( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.12( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.8( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.b( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.8( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.e( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.a( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.5( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.18( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.1c( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[12.19( empty local-lis/les=0/0 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 58 pg[10.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=58) [1] r=0 lpr=58 pi=[54,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:23 np0005550137 podman[99156]: 2025-12-08 09:49:23.476202396 +0000 UTC m=+1.958808179 container create e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc (image=quay.io/ceph/ceph:v19, name=awesome_dijkstra, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  8 04:49:23 np0005550137 podman[99156]: 2025-12-08 09:49:23.43011579 +0000 UTC m=+1.912721653 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:49:23 np0005550137 podman[98948]: 2025-12-08 09:49:23.523357254 +0000 UTC m=+6.408820320 container create dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f (image=quay.io/ceph/grafana:10.4.0, name=magical_khorana, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 systemd[1]: Started libpod-conmon-e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc.scope.
Dec  8 04:49:23 np0005550137 systemd[1]: Started libpod-conmon-dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f.scope.
Dec  8 04:49:23 np0005550137 podman[98948]: 2025-12-08 09:49:23.505514911 +0000 UTC m=+6.390978007 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  8 04:49:23 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccb8ebfa1a2e5e040cf784046a38d79ae713f9a6ca4070732cb9bd6e34e28ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:23 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bccb8ebfa1a2e5e040cf784046a38d79ae713f9a6ca4070732cb9bd6e34e28ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:23 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:23 np0005550137 podman[98948]: 2025-12-08 09:49:23.581864361 +0000 UTC m=+6.467327447 container init dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f (image=quay.io/ceph/grafana:10.4.0, name=magical_khorana, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 podman[99156]: 2025-12-08 09:49:23.587208291 +0000 UTC m=+2.069814074 container init e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc (image=quay.io/ceph/ceph:v19, name=awesome_dijkstra, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  8 04:49:23 np0005550137 podman[98948]: 2025-12-08 09:49:23.589812698 +0000 UTC m=+6.475275774 container start dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f (image=quay.io/ceph/grafana:10.4.0, name=magical_khorana, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 podman[98948]: 2025-12-08 09:49:23.59288599 +0000 UTC m=+6.478349096 container attach dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f (image=quay.io/ceph/grafana:10.4.0, name=magical_khorana, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 magical_khorana[99212]: 472 0
Dec  8 04:49:23 np0005550137 podman[99156]: 2025-12-08 09:49:23.59458758 +0000 UTC m=+2.077193343 container start e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc (image=quay.io/ceph/ceph:v19, name=awesome_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:49:23 np0005550137 podman[99156]: 2025-12-08 09:49:23.598303572 +0000 UTC m=+2.080909325 container attach e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc (image=quay.io/ceph/ceph:v19, name=awesome_dijkstra, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:49:23 np0005550137 systemd[1]: libpod-dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f.scope: Deactivated successfully.
Dec  8 04:49:23 np0005550137 podman[98948]: 2025-12-08 09:49:23.604564219 +0000 UTC m=+6.490027295 container died dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f (image=quay.io/ceph/grafana:10.4.0, name=magical_khorana, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 systemd[1]: var-lib-containers-storage-overlay-8760f38555490c0bcb5af30f9ed026d17f667b60c231ee1b562ba161456abb73-merged.mount: Deactivated successfully.
Dec  8 04:49:23 np0005550137 podman[98948]: 2025-12-08 09:49:23.652986905 +0000 UTC m=+6.538449971 container remove dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f (image=quay.io/ceph/grafana:10.4.0, name=magical_khorana, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 systemd[1]: libpod-conmon-dd2f00d3e6af564ffa76db7e101f73a065b339fd35659ead456914f9056b531f.scope: Deactivated successfully.
Dec  8 04:49:23 np0005550137 podman[99249]: 2025-12-08 09:49:23.720278844 +0000 UTC m=+0.043039916 container create 2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6 (image=quay.io/ceph/grafana:10.4.0, name=eloquent_burnell, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 systemd[1]: Started libpod-conmon-2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6.scope.
Dec  8 04:49:23 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:23 np0005550137 podman[99249]: 2025-12-08 09:49:23.699811653 +0000 UTC m=+0.022572755 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  8 04:49:23 np0005550137 podman[99249]: 2025-12-08 09:49:23.797610953 +0000 UTC m=+0.120372055 container init 2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6 (image=quay.io/ceph/grafana:10.4.0, name=eloquent_burnell, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 podman[99249]: 2025-12-08 09:49:23.804621712 +0000 UTC m=+0.127382784 container start 2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6 (image=quay.io/ceph/grafana:10.4.0, name=eloquent_burnell, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 podman[99249]: 2025-12-08 09:49:23.807771176 +0000 UTC m=+0.130532288 container attach 2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6 (image=quay.io/ceph/grafana:10.4.0, name=eloquent_burnell, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 eloquent_burnell[99273]: 472 0
Dec  8 04:49:23 np0005550137 systemd[1]: libpod-2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6.scope: Deactivated successfully.
Dec  8 04:49:23 np0005550137 podman[99249]: 2025-12-08 09:49:23.809592701 +0000 UTC m=+0.132353803 container died 2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6 (image=quay.io/ceph/grafana:10.4.0, name=eloquent_burnell, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec  8 04:49:23 np0005550137 systemd[1]: var-lib-containers-storage-overlay-88c264c1787eda7d97319553a018582ae2b0b365027affc7898d70bbb71703f2-merged.mount: Deactivated successfully.
Dec  8 04:49:23 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec  8 04:49:23 np0005550137 podman[99249]: 2025-12-08 09:49:23.8534251 +0000 UTC m=+0.176186192 container remove 2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6 (image=quay.io/ceph/grafana:10.4.0, name=eloquent_burnell, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:23 np0005550137 systemd[1]: libpod-conmon-2b58a6cbcfb62cd224f59904fe6286b21dc08af2b6db585351fb5b445d30caf6.scope: Deactivated successfully.
Dec  8 04:49:23 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:24 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:24 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:24 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec  8 04:49:24 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:24 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec  8 04:49:24 np0005550137 systemd[1]: Starting Ceph grafana.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:49:24 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.1b( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.18( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.19( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.18( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.5( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.2( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.5( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.8( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.13( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.14( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[10.15( empty local-lis/les=0/0 n=0 ec=54/38 lis/c=54/54 les/c/f=55/55/0 sis=59) [1]/[0] r=-1 lpr=59 pi=[54,59)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.19( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.1c( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.8( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.a( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.e( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.c( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.6( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.b( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.10( v 57'47 lc 49'14 (0'0,57'47] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=57'47 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 59 pg[12.12( v 49'44 (0'0,49'44] local-lis/les=58/59 n=0 ec=56/47 lis/c=56/56 les/c/f=57/57/0 sis=58) [1] r=0 lpr=58 pi=[56,58)/1 crt=49'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:24 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:24 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:24 np0005550137 podman[99418]: 2025-12-08 09:49:24.838500152 +0000 UTC m=+0.055178728 container create b8b17017c0f8b03c982d86c76026d22cd109a761d95550a2b2d3e0b24e8d9fc9 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:24 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86b2c750d09f580b9d0ba33e2685eea552d8e935ab26508b8c6038f89080ecb/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86b2c750d09f580b9d0ba33e2685eea552d8e935ab26508b8c6038f89080ecb/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86b2c750d09f580b9d0ba33e2685eea552d8e935ab26508b8c6038f89080ecb/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86b2c750d09f580b9d0ba33e2685eea552d8e935ab26508b8c6038f89080ecb/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:24 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f86b2c750d09f580b9d0ba33e2685eea552d8e935ab26508b8c6038f89080ecb/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:24 np0005550137 podman[99418]: 2025-12-08 09:49:24.811461115 +0000 UTC m=+0.028139751 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  8 04:49:24 np0005550137 podman[99418]: 2025-12-08 09:49:24.907887474 +0000 UTC m=+0.124566070 container init b8b17017c0f8b03c982d86c76026d22cd109a761d95550a2b2d3e0b24e8d9fc9 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:24 np0005550137 podman[99418]: 2025-12-08 09:49:24.9164564 +0000 UTC m=+0.133134976 container start b8b17017c0f8b03c982d86c76026d22cd109a761d95550a2b2d3e0b24e8d9fc9 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:24 np0005550137 bash[99418]: b8b17017c0f8b03c982d86c76026d22cd109a761d95550a2b2d3e0b24e8d9fc9
Dec  8 04:49:24 np0005550137 systemd[1]: Started Ceph grafana.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:49:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 74399342-fda7-46db-b41b-ad6c35f98e21 (Updating grafana deployment (+1 -> 1))
Dec  8 04:49:25 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 74399342-fda7-46db-b41b-ad6c35f98e21 (Updating grafana deployment (+1 -> 1)) in 9 seconds
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev f11e4b1d-c93d-4913-b13f-24d42a85fcc8 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.dmkdub on compute-0
Dec  8 04:49:25 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.dmkdub on compute-0
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.09026289Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-08T09:49:25Z
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090510747Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090517547Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090521767Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090525177Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090529318Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090532648Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090535918Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090539538Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090543038Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090546168Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090549778Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090553258Z level=info msg=Target target=[all]
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090559319Z level=info msg="Path Home" path=/usr/share/grafana
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090562549Z level=info msg="Path Data" path=/var/lib/grafana
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090565649Z level=info msg="Path Logs" path=/var/log/grafana
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090568769Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090571969Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=settings t=2025-12-08T09:49:25.090575389Z level=info msg="App mode production"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=sqlstore t=2025-12-08T09:49:25.090821146Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=sqlstore t=2025-12-08T09:49:25.090835517Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.09128665Z level=info msg="Starting DB migrations"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.092452955Z level=info msg="Executing migration" id="create migration_log table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.093901638Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.443863ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.097190697Z level=info msg="Executing migration" id="create user table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.097956279Z level=info msg="Migration successfully executed" id="create user table" duration=766.192µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.100397343Z level=info msg="Executing migration" id="add unique index user.login"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.101040421Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=643.458µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.103568937Z level=info msg="Executing migration" id="add unique index user.email"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.10434857Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=781.854µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.106567036Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.107188665Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=620.939µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.108949778Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.109638708Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=689.11µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.111332309Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.113673589Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.33984ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.115581956Z level=info msg="Executing migration" id="create user table v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.116326878Z level=info msg="Migration successfully executed" id="create user table v2" duration=742.942µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.118593736Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.119281786Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=688.42µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.121275216Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.122191683Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=916.948µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.125605015Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.126151351Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=548.406µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.128329057Z level=info msg="Executing migration" id="Drop old table user_v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.129104399Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=775.312µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.132302765Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.139296594Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=6.990159ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.141307364Z level=info msg="Executing migration" id="Update user table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.141337665Z level=info msg="Migration successfully executed" id="Update user table charset" duration=31.751µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.143474558Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.14453127Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.056362ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.146709835Z level=info msg="Executing migration" id="Add missing user data"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.146916141Z level=info msg="Migration successfully executed" id="Add missing user data" duration=207.416µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.148971682Z level=info msg="Executing migration" id="Add is_disabled column to user"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.149951112Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=979.66µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.152081495Z level=info msg="Executing migration" id="Add index user.login/user.email"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.152779416Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=698.721µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.155121806Z level=info msg="Executing migration" id="Add is_service_account column to user"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.156075695Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=947.639µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.157903949Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.164153266Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.248607ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.166015621Z level=info msg="Executing migration" id="Add uid column to user"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.166915449Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=936.268µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.169118624Z level=info msg="Executing migration" id="Update uid column values for users"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.169291379Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=173.325µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.171604818Z level=info msg="Executing migration" id="Add unique index user_uid"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:25 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f0000d90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.172550737Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=944.539µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.175259258Z level=info msg="Executing migration" id="create temp user table v1-7"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.175975909Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=716.731µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.179338609Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.180082591Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=745.522µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.182124602Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.182913276Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=789.004µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.186156643Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.186791482Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=631.019µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.189239645Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.189853814Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=614.389µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.1927445Z level=info msg="Executing migration" id="Update temp_user table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.192769351Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=25.841µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:25 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.195472011Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.196369348Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=897.716µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.198961315Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.199604124Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=642.519µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.20182585Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.202529062Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=703.532µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.204464109Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.20513965Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=675.66µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.207056897Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.209701756Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.649089ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.211483009Z level=info msg="Executing migration" id="create temp_user v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.21215849Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=675.531µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.214581602Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.215322224Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=740.922µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.21721047Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.217845659Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=635.229µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.219574221Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.220199Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=624.919µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.222310872Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.222936651Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=625.599µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.225591741Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.226079325Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=487.095µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.228100115Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.228632841Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=532.426µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.23127104Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.23158741Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=316.75µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.233821186Z level=info msg="Executing migration" id="create star table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.234405004Z level=info msg="Migration successfully executed" id="create star table" duration=584.278µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.23664132Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.237576029Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=934.429µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.240057403Z level=info msg="Executing migration" id="create org table v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.240968979Z level=info msg="Migration successfully executed" id="create org table v1" duration=910.767µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.244074962Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.244919077Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=832.995µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.247627288Z level=info msg="Executing migration" id="create org_user table v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.24833746Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=710.592µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.250794603Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.251532215Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=738.762µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.253772972Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.254500614Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=727.592µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.257310958Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.258021138Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=711.09µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.260338297Z level=info msg="Executing migration" id="Update org table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.260358368Z level=info msg="Migration successfully executed" id="Update org table charset" duration=20.711µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.26239234Z level=info msg="Executing migration" id="Update org_user table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.26241132Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=26.161µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.264300656Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.264455261Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=158.345µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.267037578Z level=info msg="Executing migration" id="create dashboard table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.268261855Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.189566ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.27146656Z level=info msg="Executing migration" id="add index dashboard.account_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.272451559Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=985.339µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.275850971Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.277126639Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.275578ms
Dec  8 04:49:25 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.279678886Z level=info msg="Executing migration" id="create dashboard_tag table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.28050037Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=821.544µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.283302923Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.28452363Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.219777ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.287880951Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.289419726Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.538756ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.29188758Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.302112626Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=10.222115ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.304918259Z level=info msg="Executing migration" id="create dashboard v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.306526067Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.588107ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.309096904Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.310436024Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.33896ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.313592088Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.315032781Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.440414ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.31801239Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.318750062Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=737.792µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.321193965Z level=info msg="Executing migration" id="drop table dashboard_v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.323085811Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.891216ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.326082491Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.326211315Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=131.614µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.329020129Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.332633627Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.611258ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.33508168Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.338354578Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=3.271858ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.341351897Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.344871972Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=3.503434ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.347609253Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.34914095Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.541967ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.351720207Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.355151849Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.430362ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.358441107Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.360300813Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.870777ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.363394825Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.365135097Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.739802ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.369158738Z level=info msg="Executing migration" id="Update dashboard table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.369211779Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=55.122µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.372495816Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.372568439Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=46.552µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.375059683Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.378422223Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.36171ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.381072553Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.384808524Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.732472ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.388100993Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.392633278Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=4.422421ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.395787932Z level=info msg="Executing migration" id="Add column uid in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.399312128Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.518225ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.402451231Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.402854214Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=403.503µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.405500613Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.406862713Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.36185ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.410144261Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.411523292Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.378771ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.414300564Z level=info msg="Executing migration" id="Update dashboard title length"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.414328195Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=28.591µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.416376607Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.417086928Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=710.731µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.418998555Z level=info msg="Executing migration" id="create dashboard_provisioning"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.419524721Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=526.256µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.422071887Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.425759857Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=3.68757ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.427874831Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.428598662Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=724.041µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.431448197Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.43256059Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.113413ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.435526379Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.43656233Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.036291ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.43924683Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.439859978Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=615.118µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.441481007Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.442146457Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=664.99µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.444080534Z level=info msg="Executing migration" id="Add check_sum column"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.445870867Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.790523ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.448195647Z level=info msg="Executing migration" id="Add index for dashboard_title"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.449099734Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=904.927µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.451293159Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.451556077Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=266.388µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.454048402Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.454301849Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=257.028µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.458147854Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.459423382Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.278278ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.46336374Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.467060051Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.69373ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.470495353Z level=info msg="Executing migration" id="create data_source table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.472216315Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.721592ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.475853112Z level=info msg="Executing migration" id="add index data_source.account_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.477187603Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.337811ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.480710718Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.481581853Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=873.525µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.484004227Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.484976435Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=973.198µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.487271863Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.488279704Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.012051ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.490143349Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.495668014Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.501544ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.498581661Z level=info msg="Executing migration" id="create data_source table v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.499756517Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.176686ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.502000743Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.502958312Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=953.839µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.505505768Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.506278852Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=773.984µs
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: Deploying daemon haproxy.rgw.default.compute-0.dmkdub on compute-0
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.508672913Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.509209679Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=536.896µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.510690373Z level=info msg="Executing migration" id="Add column with_credentials"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.512581799Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.890306ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.51660496Z level=info msg="Executing migration" id="Add secure json data column"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.522030282Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=5.422431ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.524683031Z level=info msg="Executing migration" id="Update data_source table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.524719572Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=39.041µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.527172615Z level=info msg="Executing migration" id="Update initial version to 1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.527464224Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=296.779µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.529637429Z level=info msg="Executing migration" id="Add read_only data column"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.532304398Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=2.66482ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.534640768Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.534927206Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=289.698µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.537454562Z level=info msg="Executing migration" id="Update json_data with nulls"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.537693699Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=236.447µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.540090481Z level=info msg="Executing migration" id="Add uid column"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.542756661Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.67787ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.544794882Z level=info msg="Executing migration" id="Update uid value"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.54508004Z level=info msg="Migration successfully executed" id="Update uid value" duration=279.899µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.547367828Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.548448691Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.080633ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.55111249Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.55210987Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=997.399µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.554732238Z level=info msg="Executing migration" id="create api_key table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.555402878Z level=info msg="Migration successfully executed" id="create api_key table" duration=670.89µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.557521241Z level=info msg="Executing migration" id="add index api_key.account_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.558345576Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=824.185µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.5608458Z level=info msg="Executing migration" id="add index api_key.key"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.561519601Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=673.541µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.564040316Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.56484096Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=800.534µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.566894701Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.567640674Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=745.273µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.56920055Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.569947603Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=746.742µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.571390185Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.57219852Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=808.914µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.573838348Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.579029004Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.189926ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.580874999Z level=info msg="Executing migration" id="create api_key table v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.581490637Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=615.388µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.58360854Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.584316052Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=707.412µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.586322461Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.587168507Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=846.106µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.58897007Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.589706302Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=735.772µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.592046612Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.592410493Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=362.551µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.594065922Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.594620809Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=554.467µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.596216886Z level=info msg="Executing migration" id="Update api_key table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.596289318Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=73.382µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.598190356Z level=info msg="Executing migration" id="Add expires to api_key table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.600214076Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.02365ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.601854755Z level=info msg="Executing migration" id="Add service account foreign key"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.60369903Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.843725ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.605333859Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.605480663Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=147.054µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.607423961Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.610356128Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.930407ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.612472542Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.614376969Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.904737ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.616342727Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.617271135Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=928.538µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.619362398Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.620217653Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=856.795µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.62212787Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.622994156Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=866.166µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.624849141Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.625681286Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=831.645µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.628168671Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.629004356Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=831.865µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.63117411Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.631914403Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=739.513µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.633998295Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.634054327Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=56.261µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.635785948Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.635807638Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=22.56µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.637417257Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.63951531Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.097912ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.641056355Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.642963743Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.907348ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.645027234Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.645070645Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=44.051µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.646481928Z level=info msg="Executing migration" id="create quota table v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.647062525Z level=info msg="Migration successfully executed" id="create quota table v1" duration=584.366µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.649274661Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.649890189Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=615.128µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.652955721Z level=info msg="Executing migration" id="Update quota table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.652974411Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=19.25µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.654540059Z level=info msg="Executing migration" id="create plugin_setting table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.655130655Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=590.347µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.656989992Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.65759766Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=607.278µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.659721813Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.662815946Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.091052ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.664576848Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.664602229Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.291µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.666551046Z level=info msg="Executing migration" id="create session table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.66734778Z level=info msg="Migration successfully executed" id="create session table" duration=796.934µs
Dec  8 04:49:25 np0005550137 podman[99589]: 2025-12-08 09:49:25.667811094 +0000 UTC m=+0.050323123 container create 7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591 (image=quay.io/ceph/haproxy:2.3, name=sharp_galois)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.669460473Z level=info msg="Executing migration" id="Drop old table playlist table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.669555586Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=93.443µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.671289899Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.671361111Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=71.242µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.674841894Z level=info msg="Executing migration" id="create playlist table v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.675520654Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=678.66µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.677561985Z level=info msg="Executing migration" id="create playlist item table v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.678250096Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=687.431µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.680106412Z level=info msg="Executing migration" id="Update playlist table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.680125353Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=19.591µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.681840893Z level=info msg="Executing migration" id="Update playlist_item table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.681862004Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=21.721µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.683505133Z level=info msg="Executing migration" id="Add playlist column created_at"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.685908574Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.403381ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.687342148Z level=info msg="Executing migration" id="Add playlist column updated_at"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.689624846Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.282478ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.691252495Z level=info msg="Executing migration" id="drop preferences table v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.691327537Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=75.552µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.692968986Z level=info msg="Executing migration" id="drop preferences table v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.693041118Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=72.692µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.694721858Z level=info msg="Executing migration" id="create preferences table v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.695345256Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=621.218µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.69749011Z level=info msg="Executing migration" id="Update preferences table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.697508001Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=18.041µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.699200342Z level=info msg="Executing migration" id="Add column team_id in preferences"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.701413818Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.212836ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.703027156Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.70316173Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=134.974µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.704706866Z level=info msg="Executing migration" id="Add column week_start in preferences"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.707084317Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.378001ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.708668994Z level=info msg="Executing migration" id="Add column preferences.json_data"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.711196869Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.527645ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.712898811Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.712943112Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=45.551µs
Dec  8 04:49:25 np0005550137 systemd[1]: Started libpod-conmon-7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591.scope.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.715281792Z level=info msg="Executing migration" id="Add preferences index org_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.716078086Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=795.154µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.718430476Z level=info msg="Executing migration" id="Add preferences index user_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.719330043Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=899.497µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.721803687Z level=info msg="Executing migration" id="create alert table v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.722761005Z level=info msg="Migration successfully executed" id="create alert table v1" duration=957.148µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.725414374Z level=info msg="Executing migration" id="add index alert org_id & id "
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.726251059Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=836.845µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.728606599Z level=info msg="Executing migration" id="add index alert state"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.729550938Z level=info msg="Migration successfully executed" id="add index alert state" duration=944.499µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.731897918Z level=info msg="Executing migration" id="add index alert dashboard_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.732832066Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=934.538µs
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.73632914Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Dec  8 04:49:25 np0005550137 podman[99589]: 2025-12-08 09:49:25.645419286 +0000 UTC m=+0.027931415 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.738007321Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.68453ms
Dec  8 04:49:25 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.741224867Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.742215416Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=990.809µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.744775842Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.74637067Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.593118ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.748770622Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec  8 04:49:25 np0005550137 podman[99589]: 2025-12-08 09:49:25.758632956 +0000 UTC m=+0.141145055 container init 7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591 (image=quay.io/ceph/haproxy:2.3, name=sharp_galois)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.765300205Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=16.524593ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.767920303Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.768942794Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.023411ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.771336636Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Dec  8 04:49:25 np0005550137 podman[99589]: 2025-12-08 09:49:25.771700077 +0000 UTC m=+0.154212106 container start 7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591 (image=quay.io/ceph/haproxy:2.3, name=sharp_galois)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.772605113Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.271268ms
Dec  8 04:49:25 np0005550137 podman[99589]: 2025-12-08 09:49:25.776375496 +0000 UTC m=+0.158887525 container attach 7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591 (image=quay.io/ceph/haproxy:2.3, name=sharp_galois)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.77684406Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.777194991Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=351.831µs
Dec  8 04:49:25 np0005550137 sharp_galois[99605]: 0 0
Dec  8 04:49:25 np0005550137 systemd[1]: libpod-7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591.scope: Deactivated successfully.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.779882521Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Dec  8 04:49:25 np0005550137 podman[99589]: 2025-12-08 09:49:25.780249242 +0000 UTC m=+0.162761271 container died 7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591 (image=quay.io/ceph/haproxy:2.3, name=sharp_galois)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.780723196Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=837.515µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.782903781Z level=info msg="Executing migration" id="create alert_notification table v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.783762117Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=857.926µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.785768927Z level=info msg="Executing migration" id="Add column is_default"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.789337373Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.567586ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.791268611Z level=info msg="Executing migration" id="Add column frequency"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.795384313Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.114432ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.797553669Z level=info msg="Executing migration" id="Add column send_reminder"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.801757724Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=4.203515ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.804474135Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Dec  8 04:49:25 np0005550137 systemd[1]: var-lib-containers-storage-overlay-54b3c99468b73ac73823a74d1d7219cfdb2fed1d19742ebee37dc7ae59abebf8-merged.mount: Deactivated successfully.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.811885106Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=7.403121ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.814811263Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.815923787Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.114414ms
Dec  8 04:49:25 np0005550137 awesome_dijkstra[99208]: could not fetch user info: no user info saved
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.819953868Z level=info msg="Executing migration" id="Update alert table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.819999059Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=48.322µs
Dec  8 04:49:25 np0005550137 podman[99589]: 2025-12-08 09:49:25.821947787 +0000 UTC m=+0.204459856 container remove 7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591 (image=quay.io/ceph/haproxy:2.3, name=sharp_galois)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.823307247Z level=info msg="Executing migration" id="Update alert_notification table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.823336388Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=32.121µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.825780381Z level=info msg="Executing migration" id="create notification_journal table v1"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.826532273Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=753.682µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.829045188Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.829873433Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=828.925µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.83311311Z level=info msg="Executing migration" id="drop alert_notification_journal"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.835562143Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=2.451793ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.839560223Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Dec  8 04:49:25 np0005550137 systemd[1]: libpod-conmon-7a7a99fbd137d44d21b7dc9840eed40594057acffd9ed2e62843f0d6f7eff591.scope: Deactivated successfully.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.8411442Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.583657ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.843676295Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.844903953Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.226377ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.847143499Z level=info msg="Executing migration" id="Add for to alert table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.850428347Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.283368ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.85286306Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.856679623Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.813273ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.858819038Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.859123967Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=304.229µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.861803347Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.863013983Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.212426ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.865526608Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.867324522Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.796534ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.869481036Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Dec  8 04:49:25 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 23 completed events
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.880524236Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=11.03914ms
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.882727201Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.882800244Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=72.882µs
Dec  8 04:49:25 np0005550137 podman[99156]: 2025-12-08 09:49:25.885202085 +0000 UTC m=+4.367807858 container died e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc (image=quay.io/ceph/ceph:v19, name=awesome_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.885746202Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Dec  8 04:49:25 np0005550137 systemd[1]: libpod-e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc.scope: Deactivated successfully.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.886641929Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=898.948µs
Dec  8 04:49:25 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.889471063Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.890346839Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=874.855µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.89306831Z level=info msg="Executing migration" id="Drop old annotation table v4"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.893143392Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=75.332µs
Dec  8 04:49:25 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.894825513Z level=info msg="Executing migration" id="create annotation table v5"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.895527494Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=702.101µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.897798612Z level=info msg="Executing migration" id="add index annotation 0 v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.898499923Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=701.191µs
Dec  8 04:49:25 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event c2194446-03c4-40b0-85ec-4545548ba00a (Global Recovery Event) in 10 seconds
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.900606976Z level=info msg="Executing migration" id="add index annotation 1 v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.901373378Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=765.762µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.903862073Z level=info msg="Executing migration" id="add index annotation 2 v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.904620955Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=759.252µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.906752369Z level=info msg="Executing migration" id="add index annotation 3 v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.907534232Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=784.153µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.909742089Z level=info msg="Executing migration" id="add index annotation 4 v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.910556092Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=812.613µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.913024556Z level=info msg="Executing migration" id="Update annotation table charset"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.913045676Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.35µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.91480329Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.918019226Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.217847ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.921431727Z level=info msg="Executing migration" id="Drop category_id index"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.922270472Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=839.015µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.924481788Z level=info msg="Executing migration" id="Add column tags to annotation table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.927532719Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.053291ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.929774806Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.930331903Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=557.797µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.932281731Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.933027403Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=743.442µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.935235739Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.935953221Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=717.342µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.938085384Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.946595988Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.509414ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.948394623Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.94900716Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=612.638µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.950602988Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.951301709Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=701.271µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.953297919Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.953545846Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=245.757µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.955245317Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.955763292Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=518.185µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.957936287Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.958213205Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=280.188µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.960349049Z level=info msg="Executing migration" id="Add created time to annotation table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.964703069Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.35363ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.966761391Z level=info msg="Executing migration" id="Add updated time to annotation table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.970930296Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.169555ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.973074619Z level=info msg="Executing migration" id="Add index for created in annotation table"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.975353987Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=2.278668ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.977732528Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Dec  8 04:49:25 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.979055338Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.32223ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.981614134Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.982008326Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=393.812µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.984058167Z level=info msg="Executing migration" id="Add epoch_end column"
Dec  8 04:49:25 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.988707926Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.649249ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.990737557Z level=info msg="Executing migration" id="Add index for epoch_end"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.99186849Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.127763ms
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.994249291Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.99455156Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=302.159µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.99688589Z level=info msg="Executing migration" id="Move region to single row"
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.997415616Z level=info msg="Migration successfully executed" id="Move region to single row" duration=529.976µs
Dec  8 04:49:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:25.999308903Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.000456627Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.147724ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.002561099Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.003694233Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.133544ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.005457966Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.006608451Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.150384ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.008507117Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.00961283Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.105293ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.011445185Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.012548008Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.102613ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.014348301Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.015466365Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.117604ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.017598659Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.017852026Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=254.988µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.020772223Z level=info msg="Executing migration" id="create test_data table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.021846435Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.073842ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.024986709Z level=info msg="Executing migration" id="create dashboard_version table v1"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.0260183Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.031451ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.028742571Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.030091242Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.348391ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.033010248Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.034200124Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.189756ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.036956827Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.037285817Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=330.88µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.039441951Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.039991357Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=549.366µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.042133771Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.042336137Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=200.586µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.044611435Z level=info msg="Executing migration" id="create team table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.045573524Z level=info msg="Migration successfully executed" id="create team table" duration=961.729µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.048068298Z level=info msg="Executing migration" id="add index team.org_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.049269105Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.200456ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.051887913Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.053024216Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.137043ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.055382236Z level=info msg="Executing migration" id="Add column uid in team"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.059812169Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.428843ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.062057946Z level=info msg="Executing migration" id="Update uid column values in team"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.062275123Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=216.376µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.064359474Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.065412346Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.052392ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.067849769Z level=info msg="Executing migration" id="create team member table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.068749736Z level=info msg="Migration successfully executed" id="create team member table" duration=899.397µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.07122758Z level=info msg="Executing migration" id="add index team_member.org_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.07226597Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.0379ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.07455411Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.07559707Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.04273ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.078890849Z level=info msg="Executing migration" id="add index team_member.team_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.079950621Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.059092ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.08229321Z level=info msg="Executing migration" id="Add column email to team table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.087408793Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.114133ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.089619049Z level=info msg="Executing migration" id="Add column external to team_member table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.094619548Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.998279ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.09700478Z level=info msg="Executing migration" id="Add column permission to team_member table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.102061141Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.055181ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.104186734Z level=info msg="Executing migration" id="create dashboard acl table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.105335969Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.148015ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.108430721Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.109551394Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.119103ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.111968586Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.113217514Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.248918ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.115736829Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.116986626Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.248947ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.119479621Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.120589334Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.111312ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.122899352Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.124064088Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.165075ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.126430868Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.127627984Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.196506ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.130109058Z level=info msg="Executing migration" id="add index dashboard_permission"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.131354955Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.246677ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.133805658Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.134289083Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=483.554µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.136453428Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.136732475Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=279.047µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.138623003Z level=info msg="Executing migration" id="create tag table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.139327883Z level=info msg="Migration successfully executed" id="create tag table" duration=705.871µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.141778597Z level=info msg="Executing migration" id="add index tag.key_value"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.142747235Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=968.838µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.144983692Z level=info msg="Executing migration" id="create login attempt table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.145560579Z level=info msg="Migration successfully executed" id="create login attempt table" duration=578.697µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.147847857Z level=info msg="Executing migration" id="add index login_attempt.username"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.148797486Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=949.289µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.151245099Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.152038523Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=793.674µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.153804475Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.168924677Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.115332ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.171197745Z level=info msg="Executing migration" id="create login_attempt v2"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.171878285Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=681.4µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.173977438Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.174684159Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=704.511µs
Dec  8 04:49:26 np0005550137 systemd[1]: var-lib-containers-storage-overlay-bccb8ebfa1a2e5e040cf784046a38d79ae713f9a6ca4070732cb9bd6e34e28ef-merged.mount: Deactivated successfully.
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.179616216Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.18007291Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=456.754µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.18275394Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.18342058Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=666.55µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.18577859Z level=info msg="Executing migration" id="create user auth table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.186452951Z level=info msg="Migration successfully executed" id="create user auth table" duration=674.091µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.188384598Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.189350917Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=964.599µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:49:26.192Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001702796s
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.193277204Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.193320915Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=44.431µs
Dec  8 04:49:26 np0005550137 podman[99156]: 2025-12-08 09:49:26.193568603 +0000 UTC m=+4.676174356 container remove e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc (image=quay.io/ceph/ceph:v19, name=awesome_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.195751128Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.199351055Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.599537ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.201128308Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.204572141Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.443453ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.206426807Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.209970202Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.541875ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.211748336Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Dec  8 04:49:26 np0005550137 systemd[1]: libpod-conmon-e2c2a8797cbcec27825bc8ec881473d8bf15bcd4b16414929ca795828125f0dc.scope: Deactivated successfully.
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.215387064Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.638438ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.217205719Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.218003022Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=797.183µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.220275581Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.223813516Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.535624ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.225576789Z level=info msg="Executing migration" id="create server_lock table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.226467365Z level=info msg="Migration successfully executed" id="create server_lock table" duration=892.056µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.228966809Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.229826835Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=860.056µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.232418893Z level=info msg="Executing migration" id="create user auth token table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.233144054Z level=info msg="Migration successfully executed" id="create user auth token table" duration=724.961µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.235453333Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.236212366Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=758.893µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.238506445Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.239419732Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=913.178µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.241823224Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.24268105Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=857.866µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.244907746Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.248684658Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.776792ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.250510513Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.251295147Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=784.754µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.253528833Z level=info msg="Executing migration" id="create cache_data table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.254270495Z level=info msg="Migration successfully executed" id="create cache_data table" duration=741.142µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.256561694Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.257332006Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=770.332µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.259483661Z level=info msg="Executing migration" id="create short_url table v1"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.260257944Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=773.993µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.262551262Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.263364487Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=813.005µs
Dec  8 04:49:26 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.266603054Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.266673506Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=43.542µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.26851035Z level=info msg="Executing migration" id="delete alert_definition table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.268670065Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=159.445µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.270296474Z level=info msg="Executing migration" id="recreate alert_definition table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.271025475Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=728.791µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.273233432Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.274041185Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=807.713µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.276108648Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.276946842Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=837.914µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.279135708Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.279176489Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=41.341µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.280983833Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.281773947Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=790.224µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.283529479Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.284324523Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=795.384µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.286224159Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.287036844Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=810.605µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.288955031Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.289761675Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=806.454µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.291860718Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.295829717Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=3.967598ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.297808535Z level=info msg="Executing migration" id="drop alert_definition table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.298792195Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=983.18µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.301032962Z level=info msg="Executing migration" id="delete alert_definition_version table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.301280789Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=248.297µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.303683361Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.304934268Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.250637ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.307236207Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.30835951Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.123173ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.310491444Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.311406461Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=916.437µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.313497623Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.313628287Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=51.232µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.315309888Z level=info msg="Executing migration" id="drop alert_definition_version table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.316282537Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=971.948µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.318468113Z level=info msg="Executing migration" id="create alert_instance table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.319586795Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.117952ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.321631966Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.322522593Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=890.607µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.324327047Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.325497332Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.170055ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.328466051Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.332738838Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=4.268657ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.338376636Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.339600253Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.225237ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.341939793Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.342792789Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=853.046µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.34484289Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Dec  8 04:49:26 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:26 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec  8 04:49:26 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.366535887Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=21.667836ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.791584069Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.813865604Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.275935ms
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.816283916Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.817285076Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.00245ms
Dec  8 04:49:26 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.819560215Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.820618835Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.059411ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.823642246Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=0/0 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=0/0 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=0/0 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=0/0 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=0/0 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=0/0 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=0/0 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=0/0 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=0/0 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=0/0 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=0/0 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=0/0 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=0/0 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=0/0 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.15( v 60'130 (0'0,60'130] local-lis/les=0/0 n=0 ec=54/38 lis/c=0/54 les/c/f=0/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 lua=60'128 crt=60'130 lcod 60'129 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.15( v 60'130 (0'0,60'130] local-lis/les=0/0 n=0 ec=54/38 lis/c=0/54 les/c/f=0/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.14( v 60'130 (0'0,60'130] local-lis/les=0/0 n=0 ec=54/38 lis/c=0/54 les/c/f=0/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 luod=0'0 lua=60'127 crt=60'130 lcod 60'129 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[10.14( v 60'130 (0'0,60'130] local-lis/les=0/0 n=0 ec=54/38 lis/c=0/54 les/c/f=0/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.17( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.000617027s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 186.520355225s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.17( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.000589371s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520355225s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:26 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.3( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=8 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.000137329s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 186.520568848s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.3( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=8 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.000095367s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520568848s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=10.999452591s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 186.520736694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=10.999417305s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520736694s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=10.999260902s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 186.520751953s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=10.999094963s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.520751953s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.7( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.002241135s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 186.524322510s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.7( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.002183914s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524322510s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.002298355s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 186.524597168s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.002265930s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524597168s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.002212524s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 186.524856567s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.002181053s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.524856567s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.13( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.001770020s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 186.525054932s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:26 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 61 pg[9.13( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=61 pruub=11.001612663s) [2] r=-1 lpr=61 pi=[52,61)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.525054932s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.835408277Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=11.783462ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.837644104Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.842770817Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.126933ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.844467728Z level=info msg="Executing migration" id="create alert_rule table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.845273641Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=806.043µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.847941511Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.848817398Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=875.357µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.851315532Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.852188228Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=873.076µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.854732834Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.855612831Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=878.217µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.857866588Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.85793886Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=73.392µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.859556538Z level=info msg="Executing migration" id="add column for to alert_rule"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.864927358Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=5.36911ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.86697124Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.871248867Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.276267ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.873283728Z level=info msg="Executing migration" id="add column labels to alert_rule"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.878063431Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.779233ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.880016369Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.880802783Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=789.054µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.882640748Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.883490203Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=849.105µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.885759271Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.890034138Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.273647ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.891915125Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.895912594Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=3.995128ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.897526832Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.898298845Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=771.723µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.900314366Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.904327765Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.01129ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.906113249Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.91020552Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.091171ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.912435037Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.912482638Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=48.361µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.914343994Z level=info msg="Executing migration" id="create alert_rule_version table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.915238571Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=894.426µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.919694134Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.920563669Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=869.335µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.922626592Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.923504337Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=877.575µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.927299981Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.927362093Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=63.652µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.929139746Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.933506827Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.36434ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.935398453Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.939949269Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.548476ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.948745811Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.954191854Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.446262ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.956135362Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.960451051Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.315439ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.96207882Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.966285935Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.206875ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.968280874Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.968337886Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=58.162µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.970292355Z level=info msg="Executing migration" id=create_alert_configuration_table
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.971004606Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=712.401µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.973566203Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.978627584Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=5.059151ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.980574232Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.980622753Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=49.161µs
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.98253089Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.987697934Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=5.161064ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.990038825Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.991308052Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.271887ms
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.994213729Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Dec  8 04:49:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:26.998993571Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.780412ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.002270009Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.002919239Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=649.28µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.005366462Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.006159326Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=793.314µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.008486465Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Dec  8 04:49:27 np0005550137 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.dmkdub for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.015251018Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.759502ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.017553616Z level=info msg="Executing migration" id="create provenance_type table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.018316909Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=763.323µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.020437172Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.021369069Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=932.097µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.024305087Z level=info msg="Executing migration" id="create alert_image table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.025255286Z level=info msg="Migration successfully executed" id="create alert_image table" duration=954.429µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.027616886Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.028514203Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=897.367µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.030842543Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.030912355Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=70.902µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.032991437Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.034020468Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.028292ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.036737529Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.03778553Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.049441ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.041519151Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.041922063Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.043692556Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.044190031Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=498.275µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.046304645Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.047151019Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=847.014µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.050078807Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.055269742Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.189355ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.057340754Z level=info msg="Executing migration" id="create library_element table v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.058259331Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=918.277µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.06057012Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.061424246Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=853.916µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.063811057Z level=info msg="Executing migration" id="create library_element_connection table v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.064531379Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=717.881µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.066555299Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.067562329Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.00702ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.069510017Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.070243759Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=733.592µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.072983721Z level=info msg="Executing migration" id="increase max description length to 2048"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.073007552Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=24.801µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.074668651Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.074714062Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=45.861µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.076501436Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.076744294Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=242.988µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.078834546Z level=info msg="Executing migration" id="create data_keys table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.079868547Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.034741ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.082210277Z level=info msg="Executing migration" id="create secrets table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.08300128Z level=info msg="Migration successfully executed" id="create secrets table" duration=790.973µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.085356811Z level=info msg="Executing migration" id="rename data_keys name column to id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.11783629Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=32.473079ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.119947213Z level=info msg="Executing migration" id="add name column into data_keys"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.125257951Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.309788ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.127621032Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.127813768Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=193.986µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.129801798Z level=info msg="Executing migration" id="rename data_keys name column to label"
Dec  8 04:49:27 np0005550137 python3[99749]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid ceb838ef-9d5d-54e4-bddb-2f01adce2ad4 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.158934447Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=29.124539ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.161085912Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:27 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.188910663Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.820341ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.190899562Z level=info msg="Executing migration" id="create kv_store table v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.191782289Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=915.787µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.194637284Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.195800758Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.163344ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:27 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.197983383Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.198177549Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=193.976µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.19986082Z level=info msg="Executing migration" id="create permission table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.200726575Z level=info msg="Migration successfully executed" id="create permission table" duration=881.876µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.20324165Z level=info msg="Executing migration" id="add unique index permission.role_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.204077665Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=836.265µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.206397464Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.207313942Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=916.118µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.20961244Z level=info msg="Executing migration" id="create role table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.210338522Z level=info msg="Migration successfully executed" id="create role table" duration=726.252µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.212423294Z level=info msg="Executing migration" id="add column display_name"
Dec  8 04:49:27 np0005550137 podman[99784]: 2025-12-08 09:49:27.214764804 +0000 UTC m=+0.049128597 container create e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb (image=quay.io/ceph/ceph:v19, name=elastic_nobel, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.217827665Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.404371ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.219957739Z level=info msg="Executing migration" id="add column group_name"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.225071412Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.113192ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.22733219Z level=info msg="Executing migration" id="add index role.org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.228120894Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=790.724µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.230799604Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.231600767Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=801.184µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.233877745Z level=info msg="Executing migration" id="add index role_org_id_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.23472409Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=845.995µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.236966977Z level=info msg="Executing migration" id="create team role table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.23774868Z level=info msg="Migration successfully executed" id="create team role table" duration=780.843µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.241103791Z level=info msg="Executing migration" id="add index team_role.org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.242113421Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.01145ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.245035069Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.246049178Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.012399ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.249749609Z level=info msg="Executing migration" id="add index team_role.team_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.250919153Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.169844ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.253506842Z level=info msg="Executing migration" id="create user role table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.254224832Z level=info msg="Migration successfully executed" id="create user role table" duration=715.21µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.256988465Z level=info msg="Executing migration" id="add index user_role.org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.257996235Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.00907ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.260585953Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.261370186Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=784.034µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.263620903Z level=info msg="Executing migration" id="add index user_role.user_id"
Dec  8 04:49:27 np0005550137 podman[99806]: 2025-12-08 09:49:27.264359656 +0000 UTC m=+0.050435207 container create 9b4e650b260650b374b1c4a0eb2280fdd05729618959f1e86743dcf4e54c9d8d (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-rgw-default-compute-0-dmkdub)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.26451156Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=890.307µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.266832019Z level=info msg="Executing migration" id="create builtin role table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.267612142Z level=info msg="Migration successfully executed" id="create builtin role table" duration=780.563µs
Dec  8 04:49:27 np0005550137 systemd[1]: Started libpod-conmon-e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb.scope.
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.269922322Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.270754256Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=831.784µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.273847018Z level=info msg="Executing migration" id="add index builtin_role.name"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.274616901Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=770.023µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.276766125Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.282301341Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.535136ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.284352132Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.285222888Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=870.867µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.287918918Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Dec  8 04:49:27 np0005550137 podman[99784]: 2025-12-08 09:49:27.195637993 +0000 UTC m=+0.030001816 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.288765254Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=845.846µs
Dec  8 04:49:27 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.290831265Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.291607509Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=776.014µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.29331714Z level=info msg="Executing migration" id="add unique index role.uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.294143024Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=825.375µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.296072372Z level=info msg="Executing migration" id="create seed assignment table"
Dec  8 04:49:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75bae23e40a87f09743cc30465e7fdb5af8002279a0efe60b2c360ae9c2e262/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b75bae23e40a87f09743cc30465e7fdb5af8002279a0efe60b2c360ae9c2e262/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.296758882Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=687.05µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.299290978Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.300305489Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.014731ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.302678019Z level=info msg="Executing migration" id="add column hidden to role table"
Dec  8 04:49:27 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/623a4560717c322676efb91ec76ab0e45509bb2f38315612ee8abe7bf64972aa/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.310043609Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.36241ms
Dec  8 04:49:27 np0005550137 podman[99784]: 2025-12-08 09:49:27.312189164 +0000 UTC m=+0.146552987 container init e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb (image=quay.io/ceph/ceph:v19, name=elastic_nobel, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.312393609Z level=info msg="Executing migration" id="permission kind migration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.317940996Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.546776ms
Dec  8 04:49:27 np0005550137 podman[99784]: 2025-12-08 09:49:27.318952885 +0000 UTC m=+0.153316678 container start e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb (image=quay.io/ceph/ceph:v19, name=elastic_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.319600245Z level=info msg="Executing migration" id="permission attribute migration"
Dec  8 04:49:27 np0005550137 podman[99784]: 2025-12-08 09:49:27.321947405 +0000 UTC m=+0.156311208 container attach e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb (image=quay.io/ceph/ceph:v19, name=elastic_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.325207052Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.606107ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.327344226Z level=info msg="Executing migration" id="permission identifier migration"
Dec  8 04:49:27 np0005550137 podman[99806]: 2025-12-08 09:49:27.328775229 +0000 UTC m=+0.114850790 container init 9b4e650b260650b374b1c4a0eb2280fdd05729618959f1e86743dcf4e54c9d8d (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-rgw-default-compute-0-dmkdub)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.333243542Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.896496ms
Dec  8 04:49:27 np0005550137 podman[99806]: 2025-12-08 09:49:27.33385477 +0000 UTC m=+0.119930321 container start 9b4e650b260650b374b1c4a0eb2280fdd05729618959f1e86743dcf4e54c9d8d (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-rgw-default-compute-0-dmkdub)
Dec  8 04:49:27 np0005550137 podman[99806]: 2025-12-08 09:49:27.240334928 +0000 UTC m=+0.026410529 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.335158029Z level=info msg="Executing migration" id="add permission identifier index"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.336079407Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=921.798µs
Dec  8 04:49:27 np0005550137 bash[99806]: 9b4e650b260650b374b1c4a0eb2280fdd05729618959f1e86743dcf4e54c9d8d
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.338618322Z level=info msg="Executing migration" id="add permission action scope role_id index"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.339646673Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.027481ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.342932302Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Dec  8 04:49:27 np0005550137 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.dmkdub for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.344032384Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.100322ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-rgw-default-compute-0-dmkdub[99829]: [NOTICE] 341/094927 (2) : New worker #1 (4) forked
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.345961541Z level=info msg="Executing migration" id="create query_history table v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.346849698Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=886.467µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.350000712Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.351079525Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.078073ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.354768224Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.354894048Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=128.994µs
Dec  8 04:49:27 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.362039232Z level=info msg="Executing migration" id="rbac disabled migrator"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.362139415Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=105.183µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.365780113Z level=info msg="Executing migration" id="teams permissions migration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.366316269Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=536.826µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.368339329Z level=info msg="Executing migration" id="dashboard permissions"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.368993239Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=654.09µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.371609417Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.372431312Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=822.355µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.376853264Z level=info msg="Executing migration" id="drop managed folder create actions"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.377132412Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=279.558µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.379487453Z level=info msg="Executing migration" id="alerting notification permissions"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.38007601Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=588.677µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.382291037Z level=info msg="Executing migration" id="create query_history_star table v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.383308967Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.01761ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.385884243Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.387111211Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.226747ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.389803511Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.398445529Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.640158ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.400461469Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.400593613Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=130.404µs
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.403719136Z level=info msg="Executing migration" id="create correlation table v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.405062976Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.34405ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.407745766Z level=info msg="Executing migration" id="add index correlations.uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.409153939Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.406692ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.411811978Z level=info msg="Executing migration" id="add index correlations.source_uid"
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.41323998Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.427672ms
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.416769206Z level=info msg="Executing migration" id="add correlation config column"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.422522798Z level=info msg="Migration successfully executed" id="add correlation config column" duration=5.753411ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.424576869Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.425398603Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=821.834µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.427325671Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.428819396Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.494105ms
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.431121214Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:27 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.akikwx on compute-2
Dec  8 04:49:27 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.akikwx on compute-2
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.45309618Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=21.973576ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.455018708Z level=info msg="Executing migration" id="create correlation v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.455960446Z level=info msg="Migration successfully executed" id="create correlation v2" duration=942.018µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.458221033Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.459149631Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=928.358µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.461819561Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.462735469Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=915.488µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.465332906Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.466337595Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.004409ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.468589974Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.468891853Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=303.26µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.470798119Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.471546032Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=748.583µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.473474099Z level=info msg="Executing migration" id="add provisioning column"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.479385286Z level=info msg="Migration successfully executed" id="add provisioning column" duration=5.909257ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.481455597Z level=info msg="Executing migration" id="create entity_events table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.482317993Z level=info msg="Migration successfully executed" id="create entity_events table" duration=862.606µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.484313203Z level=info msg="Executing migration" id="create dashboard public config v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.485368444Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.055302ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.488062955Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.488546109Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.490499207Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.49092566Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.493044953Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.493971711Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=926.778µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.495510327Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.496567939Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.055232ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.499218488Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.500104234Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=885.766µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.502436074Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.503343621Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=906.907µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.506730742Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.507580898Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=850.396µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.509520665Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.510385321Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=864.176µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.51570343Z level=info msg="Executing migration" id="Drop public config table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.516430722Z level=info msg="Migration successfully executed" id="Drop public config table" duration=727.262µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.518084671Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.518987008Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=902.507µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.520852084Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.52173735Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=885.566µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.523689548Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.524625667Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=935.999µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.526680338Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.527496992Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=814.474µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.530093Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.552275772Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=22.177402ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.555024404Z level=info msg="Executing migration" id="add annotations_enabled column"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.561469937Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.444053ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.563381294Z level=info msg="Executing migration" id="add time_selection_enabled column"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.569102835Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=5.7219ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.571066803Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.571328151Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=264.948µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.573113744Z level=info msg="Executing migration" id="add share column"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.580353191Z level=info msg="Migration successfully executed" id="add share column" duration=7.237776ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.582283228Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.582515765Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=232.947µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.584180644Z level=info msg="Executing migration" id="create file table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.585146383Z level=info msg="Migration successfully executed" id="create file table" duration=965.549µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.587492403Z level=info msg="Executing migration" id="file table idx: path natural pk"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.588543865Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.051602ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.590645358Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.59173029Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.084803ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.593895275Z level=info msg="Executing migration" id="create file_meta table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.59472158Z level=info msg="Migration successfully executed" id="create file_meta table" duration=826.425µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.596955836Z level=info msg="Executing migration" id="file table idx: path key"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.598028868Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.072962ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.600146722Z level=info msg="Executing migration" id="set path collation in file table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.600280516Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=137.495µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.601989566Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.60209796Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=107.483µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.604235573Z level=info msg="Executing migration" id="managed permissions migration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.604754659Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=519.096µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.606353526Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.606560972Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=208.096µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.608192052Z level=info msg="Executing migration" id="RBAC action name migrator"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.609541521Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.349279ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.611616784Z level=info msg="Executing migration" id="Add UID column to playlist"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.620081867Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.455923ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.622366355Z level=info msg="Executing migration" id="Update uid column values in playlist"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.622589822Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=224.537µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.62453939Z level=info msg="Executing migration" id="Add index for uid in playlist"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.625930091Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.390181ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.62926395Z level=info msg="Executing migration" id="update group index for alert rules"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.629778786Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=513.946µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.631513977Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.631798276Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=284.519µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.633544059Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.633990472Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=445.873µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.635880558Z level=info msg="Executing migration" id="add action column to seed_assignment"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.642267849Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.385751ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.644035132Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.649965339Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=5.928427ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.65203642Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.653152414Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.116014ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.655050101Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.721324449Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=66.297539ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.723564337Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.724580616Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.016189ms
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.726926646Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.727980348Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.054032ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.731430151Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.751904892Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=20.471831ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.754524011Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.761179339Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.655308ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.762820299Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.763064826Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=242.867µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.764789927Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.764955702Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=165.695µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.766643112Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.766856469Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=213.387µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.76853789Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.768747925Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=209.915µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.770490758Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.770714474Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=217.236µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.772542789Z level=info msg="Executing migration" id="create folder table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.77325999Z level=info msg="Migration successfully executed" id="create folder table" duration=717.251µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.774856118Z level=info msg="Executing migration" id="Add index for parent_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.775908339Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.051601ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.778250109Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.779156516Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=906.217µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.781410864Z level=info msg="Executing migration" id="Update folder title length"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.781431694Z level=info msg="Migration successfully executed" id="Update folder title length" duration=21.37µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.783039272Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.78396534Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=925.728µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.786129994Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.787016541Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=887.897µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.789269458Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.790211307Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=941.528µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.792200796Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.792604708Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=403.562µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.79404176Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.794280347Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=238.687µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.796061501Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.796974848Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=913.547µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.798637758Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.800016949Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.378991ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.80171988Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.803036129Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.316239ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.805169412Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.806448821Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.280759ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.8084383Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.809559144Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.120904ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.811643166Z level=info msg="Executing migration" id="create anon_device table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.812621076Z level=info msg="Migration successfully executed" id="create anon_device table" duration=978.34µs
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.814592764Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.815997547Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.405383ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.818569203Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.819857111Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.287878ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.82380967Z level=info msg="Executing migration" id="create signing_key table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.825125168Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.314798ms
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.827975644Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.829261732Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.285608ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.831793778Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.833016865Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.226107ms
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.836066875Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.836440346Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=374.151µs
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.13( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.13( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.17( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.17( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.7( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.7( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.3( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=8 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.b( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.3( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=8 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.1b( v 49'120 (0'0,49'120] local-lis/les=61/62 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.18( v 49'120 (0'0,49'120] local-lis/les=61/62 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.843009593Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.5( v 49'120 (0'0,49'120] local-lis/les=61/62 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.8( v 49'120 (0'0,49'120] local-lis/les=61/62 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.19( v 49'120 (0'0,49'120] local-lis/les=61/62 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.13( v 49'120 (0'0,49'120] local-lis/les=61/62 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.2( v 49'120 (0'0,49'120] local-lis/les=61/62 n=1 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=49'120 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.15( v 60'130 (0'0,60'130] local-lis/les=61/62 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 62 pg[10.14( v 60'130 (0'0,60'130] local-lis/les=61/62 n=0 ec=54/38 lis/c=59/54 les/c/f=60/55/0 sis=61) [1] r=0 lpr=61 pi=[54,61)/1 crt=60'130 lcod 60'129 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.853853927Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=10.841883ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.855815195Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.857099314Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.286999ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.859227317Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.860145615Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=930.908µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.862551696Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.863410042Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=860.146µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.865138344Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.86599862Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=860.686µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.867572776Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.868523795Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=951.129µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.870123212Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.870950757Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=825.105µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.872711659Z level=info msg="Executing migration" id="create sso_setting table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.87372966Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.018061ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.875910455Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.876640687Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=730.392µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.878464011Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.878681538Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=218.687µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.880712678Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.88076608Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=52.092µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.882517763Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.891733638Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.215145ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.893601654Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.90219774Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=8.595727ms
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.904358945Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.904702195Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=343.14µs
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=migrator t=2025-12-08T09:49:27.906317393Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.8139243s
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=sqlstore t=2025-12-08T09:49:27.90756453Z level=info msg="Created default organization"
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=secrets t=2025-12-08T09:49:27.909614661Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  8 04:49:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=plugin.store t=2025-12-08T09:49:27.934476004Z level=info msg="Loading plugins..."
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=local.finder t=2025-12-08T09:49:28.004196075Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=plugin.store t=2025-12-08T09:49:28.004247477Z level=info msg="Plugins loaded" count=55 duration=69.771573ms
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=query_data t=2025-12-08T09:49:28.00702933Z level=info msg="Query Service initialization"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=live.push_http t=2025-12-08T09:49:28.01942376Z level=info msg="Live Push Gateway initialization"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.migration t=2025-12-08T09:49:28.022625846Z level=info msg=Starting
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.migration t=2025-12-08T09:49:28.023110221Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.migration orgID=1 t=2025-12-08T09:49:28.023494651Z level=info msg="Migrating alerts for organisation"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.migration orgID=1 t=2025-12-08T09:49:28.024164382Z level=info msg="Alerts found to migrate" alerts=0
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.migration t=2025-12-08T09:49:28.026002227Z level=info msg="Completed alerting migration"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.state.manager t=2025-12-08T09:49:28.04419652Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=infra.usagestats.collector t=2025-12-08T09:49:28.046428286Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=provisioning.datasources t=2025-12-08T09:49:28.047605042Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=provisioning.alerting t=2025-12-08T09:49:28.057536778Z level=info msg="starting to provision alerting"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=provisioning.alerting t=2025-12-08T09:49:28.057636141Z level=info msg="finished to provision alerting"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=grafanaStorageLogger t=2025-12-08T09:49:28.058200128Z level=info msg="Storage starting"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.state.manager t=2025-12-08T09:49:28.058240659Z level=info msg="Warming state cache for startup"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.multiorg.alertmanager t=2025-12-08T09:49:28.058547159Z level=info msg="Starting MultiOrg Alertmanager"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=http.server t=2025-12-08T09:49:28.060129275Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=http.server t=2025-12-08T09:49:28.06061669Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=provisioning.dashboard t=2025-12-08T09:49:28.089958177Z level=info msg="starting to provision dashboards"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.state.manager t=2025-12-08T09:49:28.127778765Z level=info msg="State cache has been initialized" states=0 duration=69.536076ms
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ngalert.scheduler t=2025-12-08T09:49:28.127822097Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ticker t=2025-12-08T09:49:28.127895299Z level=info msg=starting first_tick=2025-12-08T09:49:30Z
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=sqlstore.transactions t=2025-12-08T09:49:28.134380713Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=sqlstore.transactions t=2025-12-08T09:49:28.145577127Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=plugins.update.checker t=2025-12-08T09:49:28.160454112Z level=info msg="Update check succeeded" duration=101.764599ms
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=grafana.update.checker t=2025-12-08T09:49:28.165408089Z level=info msg="Update check succeeded" duration=106.19005ms
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=grafana-apiserver t=2025-12-08T09:49:28.308524793Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=grafana-apiserver t=2025-12-08T09:49:28.309237873Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=provisioning.dashboard t=2025-12-08T09:49:28.382252694Z level=info msg="finished to provision dashboards"
Dec  8 04:49:28 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 17 active+remapped, 336 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 494 B/s, 0 keys/s, 3 objects/s recovering
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec  8 04:49:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:28 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f00018b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 63 pg[9.17( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 63 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 63 pg[9.3( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=8 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 63 pg[9.7( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 63 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 63 pg[9.13( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 63 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:28 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 63 pg[9.b( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[52,62)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:28 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=1.503045321s ======
Dec  8 04:49:28 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:27.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=1.503045321s
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: Deploying daemon haproxy.rgw.default.compute-2.akikwx on compute-2
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Dec  8 04:49:28 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]: {
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "user_id": "openstack",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "display_name": "openstack",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "email": "",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "suspended": 0,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "max_buckets": 1000,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "subusers": [],
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "keys": [
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        {
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:            "user": "openstack",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:            "access_key": "ZPYSAJCNQ7VNJOB33UBY",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:            "secret_key": "0Gj9iW1PceIVKlKrFpMYsdHsVsPlhVGOusBjpiso",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:            "active": true,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:            "create_date": "2025-12-08T09:49:28.942568Z"
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        }
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    ],
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "swift_keys": [],
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "caps": [],
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "op_mask": "read, write, delete",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "default_placement": "",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "default_storage_class": "",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "placement_tags": [],
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "bucket_quota": {
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "enabled": false,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "check_on_raw": false,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "max_size": -1,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "max_size_kb": 0,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "max_objects": -1
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    },
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "user_quota": {
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "enabled": false,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "check_on_raw": false,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "max_size": -1,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "max_size_kb": 0,
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:        "max_objects": -1
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    },
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "temp_url_keys": [],
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "type": "rgw",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "mfa_ids": [],
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "account_id": "",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "path": "/",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "create_date": "2025-12-08T09:49:28.941206Z",
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "tags": [],
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]:    "group_ids": []
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]: }
Dec  8 04:49:28 np0005550137 elastic_nobel[99824]: 
Dec  8 04:49:29 np0005550137 systemd[1]: libpod-e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb.scope: Deactivated successfully.
Dec  8 04:49:29 np0005550137 podman[99784]: 2025-12-08 09:49:29.010883434 +0000 UTC m=+1.845247257 container died e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb (image=quay.io/ceph/ceph:v19, name=elastic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec  8 04:49:29 np0005550137 systemd[1]: var-lib-containers-storage-overlay-b75bae23e40a87f09743cc30465e7fdb5af8002279a0efe60b2c360ae9c2e262-merged.mount: Deactivated successfully.
Dec  8 04:49:29 np0005550137 podman[99784]: 2025-12-08 09:49:29.055725293 +0000 UTC m=+1.890089086 container remove e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb (image=quay.io/ceph/ceph:v19, name=elastic_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:49:29 np0005550137 systemd[1]: libpod-conmon-e3990d395af4ece9de0a6415fa4c61edcb82c6b9c8d78de5a591da3358c920fb.scope: Deactivated successfully.
Dec  8 04:49:29 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:29 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:29 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:29 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:29 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:29 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:29 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:29.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec  8 04:49:29 np0005550137 python3[99969]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.3( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=8 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.966196060s) [2] async=[2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 193.544845581s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.3( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=8 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.966011047s) [2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.544845581s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.b( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.966584206s) [2] async=[2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 193.545562744s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.b( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.966542244s) [2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.545562744s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.965527534s) [2] async=[2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 193.544830322s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.f( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.965482712s) [2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.544830322s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.17( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.959469795s) [2] async=[2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 193.538879395s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.17( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.959263802s) [2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.538879395s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.7( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.965024948s) [2] async=[2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 193.544921875s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.7( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=6 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.964967728s) [2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.544921875s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.965091705s) [2] async=[2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 193.545242310s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.965060234s) [2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.545242310s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.964514732s) [2] async=[2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 193.544952393s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.1f( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.964447975s) [2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.544952393s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.13( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.964385033s) [2] async=[2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 193.545043945s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:29 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 64 pg[9.13( v 49'1026 (0'0,49'1026] local-lis/les=62/63 n=5 ec=52/36 lis/c=62/52 les/c/f=63/53/0 sis=64 pruub=14.964345932s) [2] r=-1 lpr=64 pi=[52,64)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 193.545043945s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:29.947855) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187369948022, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7262, "num_deletes": 252, "total_data_size": 13467527, "memory_usage": 14198080, "flush_reason": "Manual Compaction"}
Dec  8 04:49:29 np0005550137 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec  8 04:49:29 np0005550137 ceph-mgr[74806]: [dashboard INFO request] [192.168.122.100:36844] [GET] [200] [0.139s] [6.3K] [c2d7eefd-ccc3-47b1-9e78-0379fab03975] /
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370178926, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12072887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 7399, "table_properties": {"data_size": 12046314, "index_size": 16986, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 82161, "raw_average_key_size": 24, "raw_value_size": 11981018, "raw_average_value_size": 3530, "num_data_blocks": 748, "num_entries": 3394, "num_filter_entries": 3394, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187099, "oldest_key_time": 1765187099, "file_creation_time": 1765187369, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80444841-be0f-461b-9293-2c19ffebbf01", "db_session_id": "WSOFQ4I8QWDIF20O9U4H", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 231134 microseconds, and 23821 cpu microseconds.
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.178992) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12072887 bytes OK
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.179029) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.246413) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.246450) EVENT_LOG_v1 {"time_micros": 1765187370246443, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.246474) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13434849, prev total WAL file size 13439783, number of live WAL files 2.
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.249847) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(57KB) 8(1944B)]
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370249943, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12133321, "oldest_snapshot_seqno": -1}
Dec  8 04:49:30 np0005550137 python3[99995]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3215 keys, 12115376 bytes, temperature: kUnknown
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370367924, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12115376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12089116, "index_size": 17126, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8069, "raw_key_size": 81077, "raw_average_key_size": 25, "raw_value_size": 12025359, "raw_average_value_size": 3740, "num_data_blocks": 755, "num_entries": 3215, "num_filter_entries": 3215, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187097, "oldest_key_time": 0, "file_creation_time": 1765187370, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80444841-be0f-461b-9293-2c19ffebbf01", "db_session_id": "WSOFQ4I8QWDIF20O9U4H", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: [dashboard INFO request] [192.168.122.100:36856] [GET] [200] [0.003s] [6.3K] [7e48da44-2784-458b-b481-a019976ef68b] /
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.368354) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12115376 bytes
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.452549) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 102.7 rd, 102.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.6, 0.0 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3503, records dropped: 288 output_compression: NoCompression
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.452609) EVENT_LOG_v1 {"time_micros": 1765187370452585, "job": 4, "event": "compaction_finished", "compaction_time_micros": 118152, "compaction_time_cpu_micros": 28051, "output_level": 6, "num_output_files": 1, "total_output_size": 12115376, "num_input_records": 3503, "num_output_records": 3215, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370456744, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370456919, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187370456972, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:30.249755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 17 active+remapped, 336 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 494 B/s, 0 keys/s, 3 objects/s recovering
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.qvwaqs on compute-0
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.qvwaqs on compute-0
Dec  8 04:49:30 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec  8 04:49:30 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec  8 04:49:30 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:30 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:30 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:30 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:30 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:30.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 24 completed events
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:49:30 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:30 np0005550137 ceph-mgr[74806]: [progress WARNING root] Starting Global Recovery Event,17 pgs not in active + clean state
Dec  8 04:49:31 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:31 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:31 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:31 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Dec  8 04:49:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec  8 04:49:31 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:31 np0005550137 podman[100089]: 2025-12-08 09:49:31.331209915 +0000 UTC m=+0.059315252 container create 3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209 (image=quay.io/ceph/keepalived:2.2.4, name=agitated_allen, name=keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., vcs-type=git)
Dec  8 04:49:31 np0005550137 systemd[1]: Started libpod-conmon-3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209.scope.
Dec  8 04:49:31 np0005550137 podman[100089]: 2025-12-08 09:49:31.310897389 +0000 UTC m=+0.039002726 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  8 04:49:31 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:31 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:31 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:31 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:31.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:31 np0005550137 podman[100089]: 2025-12-08 09:49:31.431797309 +0000 UTC m=+0.159902656 container init 3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209 (image=quay.io/ceph/keepalived:2.2.4, name=agitated_allen, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, io.openshift.expose-services=, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git)
Dec  8 04:49:31 np0005550137 podman[100089]: 2025-12-08 09:49:31.438979493 +0000 UTC m=+0.167084830 container start 3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209 (image=quay.io/ceph/keepalived:2.2.4, name=agitated_allen, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., vcs-type=git, name=keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  8 04:49:31 np0005550137 podman[100089]: 2025-12-08 09:49:31.442716095 +0000 UTC m=+0.170821502 container attach 3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209 (image=quay.io/ceph/keepalived:2.2.4, name=agitated_allen, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, description=keepalived for Ceph, com.redhat.component=keepalived-container, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64)
Dec  8 04:49:31 np0005550137 agitated_allen[100105]: 0 0
Dec  8 04:49:31 np0005550137 systemd[1]: libpod-3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209.scope: Deactivated successfully.
Dec  8 04:49:31 np0005550137 podman[100089]: 2025-12-08 09:49:31.443975393 +0000 UTC m=+0.172080730 container died 3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209 (image=quay.io/ceph/keepalived:2.2.4, name=agitated_allen, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, architecture=x86_64, build-date=2023-02-22T09:23:20, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.openshift.expose-services=, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-type=git, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4)
Dec  8 04:49:31 np0005550137 systemd[1]: var-lib-containers-storage-overlay-95b104658e1b466cdbaa12ed424db99b7575d8f6bffbeefb97fa973b4d70c26c-merged.mount: Deactivated successfully.
Dec  8 04:49:31 np0005550137 podman[100089]: 2025-12-08 09:49:31.489704727 +0000 UTC m=+0.217810064 container remove 3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209 (image=quay.io/ceph/keepalived:2.2.4, name=agitated_allen, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git)
Dec  8 04:49:31 np0005550137 systemd[1]: libpod-conmon-3755e032a3f7f585dcdf88110872aa6267c39a7580d248f64bade1ed01fff209.scope: Deactivated successfully.
Dec  8 04:49:31 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:31 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:31 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 65 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=14.078397751s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.520736694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 65 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=14.078319550s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.520736694s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 65 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=14.077647209s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.520980835s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 65 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=14.077572823s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.520980835s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 65 pg[9.5( v 54'1029 (0'0,54'1029] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=14.080682755s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=53'1027 lcod 53'1028 mlcod 53'1028 active pruub 194.524612427s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 65 pg[9.5( v 54'1029 (0'0,54'1029] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=14.080609322s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=53'1027 lcod 53'1028 mlcod 0'0 unknown NOTIFY pruub 194.524612427s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 65 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=14.080593109s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.524963379s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 65 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=14.080548286s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.524963379s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:31 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec  8 04:49:31 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:31 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:31 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:32 np0005550137 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.qvwaqs for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: Deploying daemon keepalived.rgw.default.compute-0.qvwaqs on compute-0
Dec  8 04:49:32 np0005550137 podman[100254]: 2025-12-08 09:49:32.419616494 +0000 UTC m=+0.054314403 container create f9f714ef6059092209b3e9fb33eefbc020c67eec18dfa91d2477121dd66aab90 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs, name=keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, release=1793, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, distribution-scope=public)
Dec  8 04:49:32 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85cc572ec51e90e6133e7995b7f6ca4de64ee0ec82d3b2b930f11decd0eb6f5f/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec  8 04:49:32 np0005550137 podman[100254]: 2025-12-08 09:49:32.486524711 +0000 UTC m=+0.121222660 container init f9f714ef6059092209b3e9fb33eefbc020c67eec18dfa91d2477121dd66aab90 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs, io.buildah.version=1.28.2, version=2.2.4, distribution-scope=public, release=1793, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec  8 04:49:32 np0005550137 podman[100254]: 2025-12-08 09:49:32.395940047 +0000 UTC m=+0.030638006 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Dec  8 04:49:32 np0005550137 podman[100254]: 2025-12-08 09:49:32.491377397 +0000 UTC m=+0.126075346 container start f9f714ef6059092209b3e9fb33eefbc020c67eec18dfa91d2477121dd66aab90 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, architecture=x86_64, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, vcs-type=git)
Dec  8 04:49:32 np0005550137 bash[100254]: f9f714ef6059092209b3e9fb33eefbc020c67eec18dfa91d2477121dd66aab90
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 66 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 66 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 66 pg[9.5( v 54'1029 (0'0,54'1029] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] r=0 lpr=66 pi=[52,66)/1 crt=53'1027 lcod 53'1028 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 66 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 66 pg[9.5( v 54'1029 (0'0,54'1029] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] r=0 lpr=66 pi=[52,66)/1 crt=53'1027 lcod 53'1028 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 66 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 66 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 66 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:32 np0005550137 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.qvwaqs for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: Starting Keepalived v2.2.4 (08/21,2021)
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: Running on Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 (built for Linux 5.14.0)
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: Configuration file /etc/keepalived/keepalived.conf
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: Starting VRRP child process, pid=4
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: Startup complete
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:32 2025: (VI_0) Entering BACKUP STATE
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: (VI_0) Entering BACKUP STATE (init)
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:32 2025: VRRP_Script(check_backend) succeeded
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  8 04:49:32 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 170 B/s, 8 objects/s recovering
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  8 04:49:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:32 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:32 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:32 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.wajgbn on compute-2
Dec  8 04:49:32 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.wajgbn on compute-2
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec  8 04:49:32 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec  8 04:49:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:32 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:32 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:32 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:32 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:32.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:33 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft[98396]: Mon Dec  8 09:49:33 2025: (VI_0) Entering MASTER STATE
Dec  8 04:49:33 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:33 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f00021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:33 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:33 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Dec  8 04:49:33 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:33 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:33 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:33 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:33.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec  8 04:49:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  8 04:49:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec  8 04:49:33 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=12.328575134s) [0] r=-1 lpr=67 pi=[52,67)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.520858765s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=12.328532219s) [0] r=-1 lpr=67 pi=[52,67)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.520858765s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=12.328369141s) [0] r=-1 lpr=67 pi=[52,67)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.520553589s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=12.328081131s) [0] r=-1 lpr=67 pi=[52,67)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.520553589s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=12.331425667s) [0] r=-1 lpr=67 pi=[52,67)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.524353027s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=12.331393242s) [0] r=-1 lpr=67 pi=[52,67)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.524353027s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=12.331562042s) [0] r=-1 lpr=67 pi=[52,67)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.524841309s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=67 pruub=12.331538200s) [0] r=-1 lpr=67 pi=[52,67)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.524841309s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.5( v 54'1029 (0'0,54'1029] local-lis/les=66/67 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[52,66)/1 crt=54'1029 lcod 53'1028 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 67 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[52,66)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: Deploying daemon keepalived.rgw.default.compute-2.wajgbn on compute-2
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:34 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev f11e4b1d-c93d-4913-b13f-24d42a85fcc8 (Updating ingress.rgw.default deployment (+4 -> 4))
Dec  8 04:49:34 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event f11e4b1d-c93d-4913-b13f-24d42a85fcc8 (Updating ingress.rgw.default deployment (+4 -> 4)) in 9 seconds
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:34 np0005550137 ceph-mgr[74806]: [progress INFO root] update: starting ev 26b46f91-7c7e-4b42-b3b0-7c07c47c61f7 (Updating prometheus deployment (+1 -> 1))
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=4 ec=52/36 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.000684738s) [2] async=[2] r=-1 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 198.199707031s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.15( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=4 ec=52/36 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.000506401s) [2] r=-1 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.199707031s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=6 ec=52/36 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.004770279s) [2] async=[2] r=-1 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 198.204360962s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.d( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=6 ec=52/36 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.004526138s) [2] r=-1 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.204360962s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.5( v 67'1033 (0'0,67'1033] local-lis/les=66/67 n=6 ec=52/36 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.004129410s) [2] async=[2] r=-1 lpr=68 pi=[52,68)/1 crt=54'1029 lcod 67'1032 mlcod 67'1032 active pruub 198.204376221s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.5( v 67'1033 (0'0,67'1033] local-lis/les=66/67 n=6 ec=52/36 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.003935814s) [2] r=-1 lpr=68 pi=[52,68)/1 crt=54'1029 lcod 67'1032 mlcod 0'0 unknown NOTIFY pruub 198.204376221s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=5 ec=52/36 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.003708839s) [2] async=[2] r=-1 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 198.204437256s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 68 pg[9.1d( v 49'1026 (0'0,49'1026] local-lis/les=66/67 n=5 ec=52/36 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.003673553s) [2] r=-1 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.204437256s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:34 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 170 B/s, 8 objects/s recovering
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  8 04:49:34 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Dec  8 04:49:34 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Dec  8 04:49:34 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:34 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:34 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:34 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:34 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:34.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:35 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:35 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:35 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:35 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f00021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:35 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:35 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:35 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:35.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec  8 04:49:35 np0005550137 ceph-mgr[74806]: [progress INFO root] Writing back 25 completed events
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec  8 04:49:35 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:35 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event d7b46e01-e5e7-4aaf-a435-5a4301007da0 (Global Recovery Event) in 5 seconds
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 69 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 69 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 69 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=4 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 69 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=68) [0]/[1] async=[0] r=0 lpr=68 pi=[52,68)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:36 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-rgw-default-compute-0-qvwaqs[100269]: Mon Dec  8 09:49:36 2025: (VI_0) Entering MASTER STATE
Dec  8 04:49:36 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 353 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: Deploying daemon prometheus.compute-0 on compute-0
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec  8 04:49:36 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:36 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e80032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:36 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:36 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:36 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:36.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec  8 04:49:36 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=4 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.076234818s) [0] async=[0] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 200.719619751s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.16( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=4 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.076183319s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.719619751s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.073046684s) [0] async=[0] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 200.716629028s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.e( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.073009491s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.716629028s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.8( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=8.877097130s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.520736694s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.8( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=8.877078056s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.520736694s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.072663307s) [0] async=[0] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 200.716659546s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.6( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=6 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.072618484s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.716659546s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.18( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=8.880543709s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 194.524734497s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.18( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=8.880520821s) [2] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 194.524734497s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=5 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.075377464s) [0] async=[0] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 200.719635010s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 70 pg[9.1e( v 49'1026 (0'0,49'1026] local-lis/les=68/69 n=5 ec=52/36 lis/c=68/52 les/c/f=69/53/0 sis=70 pruub=15.075343132s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 200.719635010s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:37 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:37 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:37 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:37 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:37 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:37 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:37 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:37.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec  8 04:49:37 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec  8 04:49:37 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec  8 04:49:37 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 71 pg[9.8( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=71) [2]/[1] r=0 lpr=71 pi=[52,71)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:37 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 71 pg[9.8( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=71) [2]/[1] r=0 lpr=71 pi=[52,71)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:37 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 71 pg[9.18( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=71) [2]/[1] r=0 lpr=71 pi=[52,71)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:37 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 71 pg[9.18( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=71) [2]/[1] r=0 lpr=71 pi=[52,71)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:37.976167) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187377976211, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 567, "num_deletes": 251, "total_data_size": 519246, "memory_usage": 531464, "flush_reason": "Manual Compaction"}
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187377984213, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 498634, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7400, "largest_seqno": 7966, "table_properties": {"data_size": 495280, "index_size": 1198, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8470, "raw_average_key_size": 19, "raw_value_size": 488172, "raw_average_value_size": 1151, "num_data_blocks": 53, "num_entries": 424, "num_filter_entries": 424, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187370, "oldest_key_time": 1765187370, "file_creation_time": 1765187377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80444841-be0f-461b-9293-2c19ffebbf01", "db_session_id": "WSOFQ4I8QWDIF20O9U4H", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 8090 microseconds, and 2718 cpu microseconds.
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:37.984261) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 498634 bytes OK
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:37.984281) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:37.985895) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:37.985913) EVENT_LOG_v1 {"time_micros": 1765187377985908, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:37.985930) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 515866, prev total WAL file size 515866, number of live WAL files 2.
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:37.986412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(486KB)], [20(11MB)]
Dec  8 04:49:37 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187377986466, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 12614010, "oldest_snapshot_seqno": -1}
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3118 keys, 11399885 bytes, temperature: kUnknown
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187378048396, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 11399885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11374858, "index_size": 16150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7813, "raw_key_size": 80558, "raw_average_key_size": 25, "raw_value_size": 11313176, "raw_average_value_size": 3628, "num_data_blocks": 705, "num_entries": 3118, "num_filter_entries": 3118, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765187097, "oldest_key_time": 0, "file_creation_time": 1765187377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "80444841-be0f-461b-9293-2c19ffebbf01", "db_session_id": "WSOFQ4I8QWDIF20O9U4H", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:38.048746) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 11399885 bytes
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:38.050241) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.3 rd, 183.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.6 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(48.2) write-amplify(22.9) OK, records in: 3639, records dropped: 521 output_compression: NoCompression
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:38.050305) EVENT_LOG_v1 {"time_micros": 1765187378050293, "job": 6, "event": "compaction_finished", "compaction_time_micros": 62059, "compaction_time_cpu_micros": 22575, "output_level": 6, "num_output_files": 1, "total_output_size": 11399885, "num_input_records": 3639, "num_output_records": 3118, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187378050601, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.052022872 +0000 UTC m=+2.803111788 volume create 46dad7d6558e384b88d2752eaaf60fb09306975d08043466bea3e308d4fc154c
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765187378055120, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:37.986327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:38.055206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:38.055211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:38.055212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:38.055213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: rocksdb: (Original Log Time 2025/12/08-09:49:38.055215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.062265038 +0000 UTC m=+2.813353954 container create 64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05 (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_keller, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.025137309 +0000 UTC m=+2.776226255 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  8 04:49:38 np0005550137 systemd[1]: Started libpod-conmon-64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05.scope.
Dec  8 04:49:38 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93d36dabeae2bccb14c9767728517ee0c2fd098d998c10932054613f30b870e2/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.146152533 +0000 UTC m=+2.897241519 container init 64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05 (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_keller, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.158881982 +0000 UTC m=+2.909970888 container start 64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05 (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_keller, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.162026056 +0000 UTC m=+2.913115042 container attach 64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05 (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_keller, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 exciting_keller[100625]: 65534 65534
Dec  8 04:49:38 np0005550137 systemd[1]: libpod-64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05.scope: Deactivated successfully.
Dec  8 04:49:38 np0005550137 conmon[100625]: conmon 64348c724740a12d5680 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05.scope/container/memory.events
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.163982415 +0000 UTC m=+2.915071351 container died 64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05 (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_keller, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 systemd[1]: var-lib-containers-storage-overlay-93d36dabeae2bccb14c9767728517ee0c2fd098d998c10932054613f30b870e2-merged.mount: Deactivated successfully.
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.206893416 +0000 UTC m=+2.957982352 container remove 64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05 (image=quay.io/prometheus/prometheus:v2.51.0, name=exciting_keller, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 podman[100370]: 2025-12-08 09:49:38.211298697 +0000 UTC m=+2.962387643 volume remove 46dad7d6558e384b88d2752eaaf60fb09306975d08043466bea3e308d4fc154c
Dec  8 04:49:38 np0005550137 systemd[1]: libpod-conmon-64348c724740a12d5680a04229549fc10f028920f69e0c7c9d8a59eccec01d05.scope: Deactivated successfully.
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.291544824 +0000 UTC m=+0.047536390 volume create 9089fd259764ea4f8a4caaa748bed4cc27d9d62fb04ce8cb68bda513f87901a8
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.305387707 +0000 UTC m=+0.061379283 container create 57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58 (image=quay.io/prometheus/prometheus:v2.51.0, name=happy_ellis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 systemd[1]: Started libpod-conmon-57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58.scope.
Dec  8 04:49:38 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.274449853 +0000 UTC m=+0.030441409 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  8 04:49:38 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a25edd18f100eadd9a4817c049499f828d7c021e40502cd25920b3d8aa15ffd/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.381467309 +0000 UTC m=+0.137458915 container init 57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58 (image=quay.io/prometheus/prometheus:v2.51.0, name=happy_ellis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.388130377 +0000 UTC m=+0.144121923 container start 57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58 (image=quay.io/prometheus/prometheus:v2.51.0, name=happy_ellis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 happy_ellis[100660]: 65534 65534
Dec  8 04:49:38 np0005550137 systemd[1]: libpod-57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58.scope: Deactivated successfully.
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.391627122 +0000 UTC m=+0.147618738 container attach 57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58 (image=quay.io/prometheus/prometheus:v2.51.0, name=happy_ellis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.39222837 +0000 UTC m=+0.148219916 container died 57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58 (image=quay.io/prometheus/prometheus:v2.51.0, name=happy_ellis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 systemd[1]: var-lib-containers-storage-overlay-6a25edd18f100eadd9a4817c049499f828d7c021e40502cd25920b3d8aa15ffd-merged.mount: Deactivated successfully.
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.430313517 +0000 UTC m=+0.186305063 container remove 57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58 (image=quay.io/prometheus/prometheus:v2.51.0, name=happy_ellis, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:38 np0005550137 podman[100643]: 2025-12-08 09:49:38.43440969 +0000 UTC m=+0.190401236 volume remove 9089fd259764ea4f8a4caaa748bed4cc27d9d62fb04ce8cb68bda513f87901a8
Dec  8 04:49:38 np0005550137 systemd[1]: libpod-conmon-57b4d5e9f5f3dab333dd25f075fc18670fc012793c77ad3d57fe8d04a4190a58.scope: Deactivated successfully.
Dec  8 04:49:38 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:38 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v73: 353 pgs: 1 active+clean+scrubbing, 4 active+remapped, 348 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 10 objects/s recovering
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  8 04:49:38 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:38 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:38 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec  8 04:49:38 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec  8 04:49:38 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:38 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f00021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:38 np0005550137 systemd[1]: Reloading.
Dec  8 04:49:38 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:38 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:49:38 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:38.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:49:38 np0005550137 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  8 04:49:38 np0005550137 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec  8 04:49:38 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 72 pg[9.9( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=14.857347488s) [2] r=-1 lpr=72 pi=[52,72)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 202.520919800s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:38 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 72 pg[9.9( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=14.857311249s) [2] r=-1 lpr=72 pi=[52,72)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.520919800s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:38 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 72 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=14.860664368s) [2] r=-1 lpr=72 pi=[52,72)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 202.524932861s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:38 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 72 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=14.860638618s) [2] r=-1 lpr=72 pi=[52,72)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 202.524932861s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:38 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Dec  8 04:49:38 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 72 pg[9.8( v 49'1026 (0'0,49'1026] local-lis/les=71/72 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[52,71)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:38 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 72 pg[9.18( v 49'1026 (0'0,49'1026] local-lis/les=71/72 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[52,71)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:39 np0005550137 systemd[1]: Starting Ceph prometheus.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:39 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e80032f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:39 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:39 np0005550137 podman[100803]: 2025-12-08 09:49:39.347811792 +0000 UTC m=+0.044855570 container create d4d95e6750bbb2e718479d53632eff7b41f65f80181b0d3e530725fcb6fb28e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:39 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71e627f1c62a4cb3611977c7784ea51a26aaa3b3f33c26d69bf9e0f1ba34b760/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:39 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71e627f1c62a4cb3611977c7784ea51a26aaa3b3f33c26d69bf9e0f1ba34b760/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:39 np0005550137 podman[100803]: 2025-12-08 09:49:39.416048 +0000 UTC m=+0.113091868 container init d4d95e6750bbb2e718479d53632eff7b41f65f80181b0d3e530725fcb6fb28e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:39 np0005550137 podman[100803]: 2025-12-08 09:49:39.325347221 +0000 UTC m=+0.022391019 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Dec  8 04:49:39 np0005550137 podman[100803]: 2025-12-08 09:49:39.425902995 +0000 UTC m=+0.122946803 container start d4d95e6750bbb2e718479d53632eff7b41f65f80181b0d3e530725fcb6fb28e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:39 np0005550137 bash[100803]: d4d95e6750bbb2e718479d53632eff7b41f65f80181b0d3e530725fcb6fb28e3
Dec  8 04:49:39 np0005550137 systemd[1]: Started Ceph prometheus.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:49:39 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:39 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:49:39 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:39.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.461Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.461Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.462Z caller=main.go:623 level=info host_details="(Linux 5.14.0-645.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 28 14:01:17 UTC 2025 x86_64 compute-0 (none))"
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.462Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.462Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.464Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.465Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.469Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.469Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.474Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.474Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.701µs
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.474Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.474Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.474Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=39.542µs wal_replay_duration=555.206µs wbl_replay_duration=190ns total_replay_duration=626.699µs
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.476Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.476Z caller=main.go:1153 level=info msg="TSDB started"
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.476Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:39 np0005550137 ceph-mgr[74806]: [progress INFO root] complete: finished ev 26b46f91-7c7e-4b42-b3b0-7c07c47c61f7 (Updating prometheus deployment (+1 -> 1))
Dec  8 04:49:39 np0005550137 ceph-mgr[74806]: [progress INFO root] Completed event 26b46f91-7c7e-4b42-b3b0-7c07c47c61f7 (Updating prometheus deployment (+1 -> 1)) in 5 seconds
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.545Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=68.455514ms db_storage=1.09µs remote_storage=1.73µs web_handler=520ns query_engine=800ns scrape=33.069647ms scrape_sd=222.887µs notify=16.54µs notify_sd=13.411µs rules=34.714377ms tracing=12.8µs
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.545Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Dec  8 04:49:39 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0[100818]: ts=2025-12-08T09:49:39.545Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Dec  8 04:49:39 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec  8 04:49:39 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 73 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[1] r=0 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 73 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[1] r=0 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 73 pg[9.18( v 49'1026 (0'0,49'1026] local-lis/les=71/72 n=5 ec=52/36 lis/c=71/52 les/c/f=72/53/0 sis=73 pruub=14.886471748s) [2] async=[2] r=-1 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 203.673400879s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 73 pg[9.18( v 49'1026 (0'0,49'1026] local-lis/les=71/72 n=5 ec=52/36 lis/c=71/52 les/c/f=72/53/0 sis=73 pruub=14.886422157s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.673400879s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 73 pg[9.9( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[1] r=0 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 73 pg[9.9( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[1] r=0 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 73 pg[9.8( v 49'1026 (0'0,49'1026] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/52 les/c/f=72/53/0 sis=73 pruub=14.886133194s) [2] async=[2] r=-1 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 203.673385620s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 73 pg[9.8( v 49'1026 (0'0,49'1026] local-lis/les=71/72 n=6 ec=52/36 lis/c=71/52 les/c/f=72/53/0 sis=73 pruub=14.886080742s) [2] r=-1 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 203.673385620s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  8 04:49:40 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.kitiwu(active, since 90s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:49:40 np0005550137 systemd[1]: session-35.scope: Deactivated successfully.
Dec  8 04:49:40 np0005550137 systemd[1]: session-35.scope: Consumed 48.807s CPU time.
Dec  8 04:49:40 np0005550137 systemd-logind[805]: Session 35 logged out. Waiting for processes to exit.
Dec  8 04:49:40 np0005550137 systemd-logind[805]: Removed session 35.
Dec  8 04:49:40 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setuser ceph since I am not root
Dec  8 04:49:40 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ignoring --setgroup ceph since I am not root
Dec  8 04:49:40 np0005550137 ceph-mgr[74806]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Dec  8 04:49:40 np0005550137 ceph-mgr[74806]: pidfile_write: ignore empty --pid-file
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec  8 04:49:40 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec  8 04:49:40 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'alerts'
Dec  8 04:49:40 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:40.821+0000 7fd1cd78a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:49:40 np0005550137 ceph-mgr[74806]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Dec  8 04:49:40 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'balancer'
Dec  8 04:49:40 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:40 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:40 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:40 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:40 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:40.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:40 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:40.911+0000 7fd1cd78a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:49:40 np0005550137 ceph-mgr[74806]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Dec  8 04:49:40 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'cephadm'
Dec  8 04:49:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec  8 04:49:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec  8 04:49:41 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec  8 04:49:41 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:41 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f0003520 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:41 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:41 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e8003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:41 np0005550137 ceph-mon[74516]: from='mgr.14478 192.168.122.100:0/575685615' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Dec  8 04:49:41 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 74 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=73/74 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:41 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 74 pg[9.9( v 49'1026 (0'0,49'1026] local-lis/les=73/74 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[1] async=[2] r=0 lpr=73 pi=[52,73)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:41 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:41 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:41 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:41.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:41 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec  8 04:49:41 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec  8 04:49:41 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'crash'
Dec  8 04:49:41 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:41.771+0000 7fd1cd78a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:49:41 np0005550137 ceph-mgr[74806]: mgr[py] Module crash has missing NOTIFY_TYPES member
Dec  8 04:49:41 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'dashboard'
Dec  8 04:49:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec  8 04:49:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec  8 04:49:42 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec  8 04:49:42 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 75 pg[9.9( v 49'1026 (0'0,49'1026] local-lis/les=73/74 n=5 ec=52/36 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=15.017644882s) [2] async=[2] r=-1 lpr=75 pi=[52,75)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 205.961135864s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:42 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 75 pg[9.9( v 49'1026 (0'0,49'1026] local-lis/les=73/74 n=5 ec=52/36 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=15.017549515s) [2] r=-1 lpr=75 pi=[52,75)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.961135864s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:42 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 75 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=73/74 n=5 ec=52/36 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.973391533s) [2] async=[2] r=-1 lpr=75 pi=[52,75)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 205.918060303s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:42 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 75 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=73/74 n=5 ec=52/36 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.973332405s) [2] r=-1 lpr=75 pi=[52,75)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.918060303s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'devicehealth'
Dec  8 04:49:42 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:42.401+0000 7fd1cd78a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'diskprediction_local'
Dec  8 04:49:42 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec  8 04:49:42 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec  8 04:49:42 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]:  from numpy import show_config as show_numpy_config
Dec  8 04:49:42 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:42.569+0000 7fd1cd78a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'influx'
Dec  8 04:49:42 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:42.638+0000 7fd1cd78a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Module influx has missing NOTIFY_TYPES member
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'insights'
Dec  8 04:49:42 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec  8 04:49:42 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'iostat'
Dec  8 04:49:42 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:42.772+0000 7fd1cd78a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Dec  8 04:49:42 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'k8sevents'
Dec  8 04:49:42 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:42 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:42 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:42 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:42 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:42.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:43 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'localpool'
Dec  8 04:49:43 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:43 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e8003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:43 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:43 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e8003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:43 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mds_autoscaler'
Dec  8 04:49:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec  8 04:49:43 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec  8 04:49:43 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec  8 04:49:43 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'mirroring'
Dec  8 04:49:43 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:43 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:43 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:43.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:43 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'nfs'
Dec  8 04:49:43 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec  8 04:49:43 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec  8 04:49:43 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:43.774+0000 7fd1cd78a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:49:43 np0005550137 ceph-mgr[74806]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Dec  8 04:49:43 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'orchestrator'
Dec  8 04:49:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:44.002+0000 7fd1cd78a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_perf_query'
Dec  8 04:49:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:44.083+0000 7fd1cd78a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'osd_support'
Dec  8 04:49:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:44.153+0000 7fd1cd78a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'pg_autoscaler'
Dec  8 04:49:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:44.233+0000 7fd1cd78a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'progress'
Dec  8 04:49:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:44.307+0000 7fd1cd78a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Module progress has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'prometheus'
Dec  8 04:49:44 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec  8 04:49:44 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec  8 04:49:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:44.675+0000 7fd1cd78a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rbd_support'
Dec  8 04:49:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:44.782+0000 7fd1cd78a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'restful'
Dec  8 04:49:44 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:44 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f0003520 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:44 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:44 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec  8 04:49:44 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:44.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec  8 04:49:44 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:44 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rgw'
Dec  8 04:49:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:45 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:45.207+0000 7fd1cd78a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:49:45 np0005550137 ceph-mgr[74806]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Dec  8 04:49:45 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'rook'
Dec  8 04:49:45 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:45 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:45 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:45.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:45 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec  8 04:49:45 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec  8 04:49:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:45.790+0000 7fd1cd78a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:49:45 np0005550137 ceph-mgr[74806]: mgr[py] Module rook has missing NOTIFY_TYPES member
Dec  8 04:49:45 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'selftest'
Dec  8 04:49:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:45.870+0000 7fd1cd78a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:49:45 np0005550137 ceph-mgr[74806]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Dec  8 04:49:45 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'snap_schedule'
Dec  8 04:49:45 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:45.955+0000 7fd1cd78a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:49:45 np0005550137 ceph-mgr[74806]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Dec  8 04:49:45 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'stats'
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'status'
Dec  8 04:49:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:46.115+0000 7fd1cd78a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Module status has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telegraf'
Dec  8 04:49:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:46.195+0000 7fd1cd78a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'telemetry'
Dec  8 04:49:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:46.365+0000 7fd1cd78a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'test_orchestrator'
Dec  8 04:49:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:46.604+0000 7fd1cd78a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'volumes'
Dec  8 04:49:46 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.7 deep-scrub starts
Dec  8 04:49:46 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.7 deep-scrub ok
Dec  8 04:49:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:46 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e8003c10 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:46.874+0000 7fd1cd78a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Loading python module 'zabbix'
Dec  8 04:49:46 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:46 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec  8 04:49:46 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:46.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec  8 04:49:46 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:46.945+0000 7fd1cd78a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Dec  8 04:49:46 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Active manager daemon compute-0.kitiwu restarted
Dec  8 04:49:46 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec  8 04:49:46 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kitiwu
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: ms_deliver_dispatch: unhandled message 0x556667aa5860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec  8 04:49:46 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec  8 04:49:46 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec  8 04:49:46 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.kitiwu(active, starting, since 0.0452981s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map Activating!
Dec  8 04:49:46 np0005550137 ceph-mgr[74806]: mgr handle_mgr_map I am now activating
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.ywanut"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.ywanut"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e11 all = 0
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.tjxjxt"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.tjxjxt"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e11 all = 0
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.hhmzvb"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.hhmzvb"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e11 all = 0
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kitiwu", "id": "compute-0.kitiwu"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-1.mmkaif", "id": "compute-1.mmkaif"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr metadata", "who": "compute-2.zqytsv", "id": "compute-2.zqytsv"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mds metadata"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).mds e11 all = 1
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd metadata"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mon metadata"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif restarted
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.mmkaif started
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv restarted
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.zqytsv started
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : Manager daemon compute-0.kitiwu is now available
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: balancer
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [balancer INFO root] Starting
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [balancer INFO root] Optimize plan auto_2025-12-08_09:49:47
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: cephadm
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: crash
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: dashboard
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO access_control] Loading user roles DB version=2
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO sso] Loading SSO DB version=1
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO root] Configured CherryPy, starting engine...
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: devicehealth
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Starting
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: iostat
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: nfs
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: orchestrator
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: pg_autoscaler
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: progress
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [pg_autoscaler INFO root] _maybe_adjust
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [progress INFO root] Loading...
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fd14a265e50>, <progress.module.GhostEvent object at 0x7fd14a265e80>, <progress.module.GhostEvent object at 0x7fd14a265eb0>, <progress.module.GhostEvent object at 0x7fd14a265ee0>, <progress.module.GhostEvent object at 0x7fd14a265e20>, <progress.module.GhostEvent object at 0x7fd14a265df0>, <progress.module.GhostEvent object at 0x7fd14a265dc0>, <progress.module.GhostEvent object at 0x7fd14a265d90>, <progress.module.GhostEvent object at 0x7fd14a265d00>, <progress.module.GhostEvent object at 0x7fd14a265ca0>, <progress.module.GhostEvent object at 0x7fd14a265cd0>, <progress.module.GhostEvent object at 0x7fd14a265d30>, <progress.module.GhostEvent object at 0x7fd14a265d60>, <progress.module.GhostEvent object at 0x7fd14a265c70>, <progress.module.GhostEvent object at 0x7fd14a265c40>, <progress.module.GhostEvent object at 0x7fd14a265c10>, <progress.module.GhostEvent object at 0x7fd14a265be0>, <progress.module.GhostEvent object at 0x7fd14a265bb0>, <progress.module.GhostEvent object at 0x7fd14a265b80>, <progress.module.GhostEvent object at 0x7fd14a265b50>, <progress.module.GhostEvent object at 0x7fd14a265040>, <progress.module.GhostEvent object at 0x7fd14a265070>, <progress.module.GhostEvent object at 0x7fd14a2650a0>, <progress.module.GhostEvent object at 0x7fd14a2650d0>, <progress.module.GhostEvent object at 0x7fd14a265100>] historic events
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [progress INFO root] Loaded OSDMap, ready.
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: prometheus
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus INFO root] server_addr: :: server_port: 9283
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus INFO root] Cache enabled
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus INFO root] starting metric collection thread
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus INFO root] Starting engine...
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:49:47] ENGINE Bus STARTING
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:49:47] ENGINE Bus STARTING
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: CherryPy Checker:
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: The Application mounted at '' has an empty config.
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] recovery thread starting
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] starting setup
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: rbd_support
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: restful
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: status
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: telemetry
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [restful INFO root] server_addr: :: server_port: 8003
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [restful WARNING root] server not running: no certificate configured
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:47 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f0003520 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:47 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 15 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] PerfHandler: starting
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: vms, start_after=
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: volumes, start_after=
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: Active manager daemon compute-0.kitiwu restarted
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: Activating manager daemon compute-0.kitiwu
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: Manager daemon compute-0.kitiwu is now available
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/mirror_snapshot_schedule"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: backups, start_after=
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: mgr load Constructed class from module: volumes
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_task_task: images, start_after=
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TaskHandler: starting
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"} v 0)
Dec  8 04:49:47 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:49:47] ENGINE Serving on http://:::9283
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:49:47] ENGINE Bus STARTED
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:49:47] ENGINE Serving on http://:::9283
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:49:47] ENGINE Bus STARTED
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [prometheus INFO root] Engine started.
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:47.299+0000 7fd135f9a640 -1 client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] load_schedules: images, start_after=
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:47.304+0000 7fd131e12640 -1 client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:47.304+0000 7fd131e12640 -1 client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:47.304+0000 7fd131e12640 -1 client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:47.304+0000 7fd131e12640 -1 client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: 2025-12-08T09:49:47.304+0000 7fd131e12640 -1 client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: client.0 error registering admin socket command: (17) File exists
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [rbd_support INFO root] setup complete
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Dec  8 04:49:47 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:47 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:47 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:47.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Dec  8 04:49:47 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec  8 04:49:47 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec  8 04:49:47 np0005550137 systemd-logind[805]: New session 37 of user ceph-admin.
Dec  8 04:49:47 np0005550137 systemd[1]: Started Session 37 of User ceph-admin.
Dec  8 04:49:47 np0005550137 ceph-mgr[74806]: [dashboard INFO dashboard.module] Engine started.
Dec  8 04:49:48 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.kitiwu(active, since 1.06894s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:49:48] ENGINE Bus STARTING
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:49:48] ENGINE Bus STARTING
Dec  8 04:49:48 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kitiwu/trash_purge_schedule"}]: dispatch
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:49:48] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:49:48] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:49:48] ENGINE Client ('192.168.122.100', 34334) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:49:48] ENGINE Client ('192.168.122.100', 34334) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:49:48 np0005550137 podman[101186]: 2025-12-08 09:49:48.456545338 +0000 UTC m=+0.071758172 container exec e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:49:48] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:49:48] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: [cephadm INFO cherrypy.error] [08/Dec/2025:09:49:48] ENGINE Bus STARTED
Dec  8 04:49:48 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : [08/Dec/2025:09:49:48] ENGINE Bus STARTED
Dec  8 04:49:48 np0005550137 podman[101186]: 2025-12-08 09:49:48.540041784 +0000 UTC m=+0.155254528 container exec_died e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Dec  8 04:49:48 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec  8 04:49:48 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec  8 04:49:48 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:48 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:48 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:48 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.002000062s ======
Dec  8 04:49:48 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:48.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Dec  8 04:49:49 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  8 04:49:49 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:49 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:49 np0005550137 ceph-mgr[74806]: [devicehealth INFO root] Check health
Dec  8 04:49:49 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:49 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f0003520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:49 np0005550137 podman[101335]: 2025-12-08 09:49:49.217981931 +0000 UTC m=+0.059304803 container exec 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:49 np0005550137 podman[101335]: 2025-12-08 09:49:49.229977886 +0000 UTC m=+0.071300718 container exec_died 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:49:48] ENGINE Bus STARTING
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:49:48] ENGINE Serving on https://192.168.122.100:7150
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:49:48] ENGINE Client ('192.168.122.100', 34334) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:49:48] ENGINE Serving on http://192.168.122.100:8765
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: [08/Dec/2025:09:49:48] ENGINE Bus STARTED
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.kitiwu(active, since 2s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:49:49 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:49 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:49 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:49.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:49 np0005550137 podman[101417]: 2025-12-08 09:49:49.540262142 +0000 UTC m=+0.069857553 container exec 7034b3b818aede2fb8291924130aa2cf54d35eaefa864ea9d8e70488e88a0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:49:49 np0005550137 podman[101417]: 2025-12-08 09:49:49.556065632 +0000 UTC m=+0.085660943 container exec_died 7034b3b818aede2fb8291924130aa2cf54d35eaefa864ea9d8e70488e88a0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec  8 04:49:49 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec  8 04:49:49 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec  8 04:49:49 np0005550137 podman[101480]: 2025-12-08 09:49:49.825331292 +0000 UTC m=+0.054786586 container exec 7f6df096ca74536932244eb4e1f4382864c206173aaf0a0b9089cd0768af80db (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo)
Dec  8 04:49:49 np0005550137 podman[101480]: 2025-12-08 09:49:49.834966255 +0000 UTC m=+0.064421499 container exec_died 7f6df096ca74536932244eb4e1f4382864c206173aaf0a0b9089cd0768af80db (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo)
Dec  8 04:49:49 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:50 np0005550137 podman[101547]: 2025-12-08 09:49:50.044594694 +0000 UTC m=+0.063556453 container exec 860f9b1fceef64b25d38d4f198ba3ddb3d3c4871377cfc5a28c6fffa3c89de5c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64, vcs-type=git, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, name=keepalived, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, distribution-scope=public)
Dec  8 04:49:50 np0005550137 podman[101547]: 2025-12-08 09:49:50.052518505 +0000 UTC m=+0.071480274 container exec_died 860f9b1fceef64b25d38d4f198ba3ddb3d3c4871377cfc5a28c6fffa3c89de5c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, architecture=x86_64)
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:49:50 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 78 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=78 pruub=11.673411369s) [0] r=-1 lpr=78 pi=[52,78)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 210.524490356s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:50 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 78 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=78 pruub=11.673361778s) [0] r=-1 lpr=78 pi=[52,78)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.524490356s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:50 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 78 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=78 pruub=11.673402786s) [0] r=-1 lpr=78 pi=[52,78)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 210.524795532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:50 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 78 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=78 pruub=11.673370361s) [0] r=-1 lpr=78 pi=[52,78)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 210.524795532s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:49:50 np0005550137 podman[101612]: 2025-12-08 09:49:50.309638386 +0000 UTC m=+0.074333290 container exec 7099edc240bca550d8ffa93b6e07ba6fcf270dd0e9d3a56c64a700eedb3fc8a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:50 np0005550137 podman[101612]: 2025-12-08 09:49:50.336013177 +0000 UTC m=+0.100708021 container exec_died 7099edc240bca550d8ffa93b6e07ba6fcf270dd0e9d3a56c64a700eedb3fc8a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:50 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:50 np0005550137 podman[101684]: 2025-12-08 09:49:50.556740473 +0000 UTC m=+0.052825455 container exec b8b17017c0f8b03c982d86c76026d22cd109a761d95550a2b2d3e0b24e8d9fc9 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:50 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec  8 04:49:50 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec  8 04:49:50 np0005550137 podman[101684]: 2025-12-08 09:49:50.75054329 +0000 UTC m=+0.246628172 container exec_died b8b17017c0f8b03c982d86c76026d22cd109a761d95550a2b2d3e0b24e8d9fc9 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:49:50 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:50 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f0003520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:50 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:50 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:50 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:50.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:51 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  8 04:49:51 np0005550137 podman[101797]: 2025-12-08 09:49:51.149958265 +0000 UTC m=+0.067642176 container exec d4d95e6750bbb2e718479d53632eff7b41f65f80181b0d3e530725fcb6fb28e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec  8 04:49:51 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 79 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] r=0 lpr=79 pi=[52,79)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:51 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 79 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] r=0 lpr=79 pi=[52,79)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:51 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 79 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] r=0 lpr=79 pi=[52,79)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:51 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 79 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] r=0 lpr=79 pi=[52,79)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:49:51 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:51 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5714002950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:51 np0005550137 podman[101797]: 2025-12-08 09:49:51.206101651 +0000 UTC m=+0.123785482 container exec_died d4d95e6750bbb2e718479d53632eff7b41f65f80181b0d3e530725fcb6fb28e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:49:51 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:51 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.kitiwu(active, since 4s), standbys: compute-1.mmkaif, compute-2.zqytsv
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Dec  8 04:49:51 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:49:51 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:51 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:51 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:51.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:51 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec  8 04:49:51 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec  8 04:49:52 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 80 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=79/80 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] async=[0] r=0 lpr=79 pi=[52,79)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:52 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 80 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=79/80 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=79) [0]/[1] async=[0] r=0 lpr=79 pi=[52,79)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:49:52 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:49:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:49:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:49:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:49:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:49:52 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:49:52 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:49:52 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec  8 04:49:52 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec  8 04:49:52 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:52 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f0003520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:52 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:52 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:52 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:52.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v9: 353 pgs: 353 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec  8 04:49:53 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 81 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=79/80 n=6 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81 pruub=15.006590843s) [0] async=[0] r=-1 lpr=81 pi=[52,81)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 216.894760132s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:53 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 81 pg[9.a( v 49'1026 (0'0,49'1026] local-lis/les=79/80 n=6 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81 pruub=15.006509781s) [0] r=-1 lpr=81 pi=[52,81)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.894760132s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:53 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 81 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=79/80 n=5 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81 pruub=15.001467705s) [0] async=[0] r=-1 lpr=81 pi=[52,81)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 216.890594482s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:49:53 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 81 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=79/80 n=5 ec=52/36 lis/c=79/52 les/c/f=80/53/0 sis=81 pruub=15.001148224s) [0] r=-1 lpr=81 pi=[52,81)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 216.890594482s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:53 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56f0003520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:53 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:53 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f571c001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:53 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:53 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:53 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:53.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.conf
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Dec  8 04:49:53 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec  8 04:49:53 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec  8 04:49:53 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:53 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec  8 04:49:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec  8 04:49:54 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec  8 04:49:54 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:54 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:54 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:54 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:54 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.conf
Dec  8 04:49:54 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:54 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:54 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec  8 04:49:54 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec  8 04:49:54 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:54 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f56e8003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:54 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:54 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:54 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:54.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:54 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:49:55 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v12: 353 pgs: 1 active+clean+scrubbing, 2 active+remapped, 350 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 14 op/s; 54 B/s, 3 objects/s recovering
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:49:55 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:55 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 kernel: ganesha.nfsd[97653]: segfault at 50 ip 00007f57c51ad32e sp 00007f57877fd210 error 4 in libntirpc.so.5.8[7f57c5192000+2c000] likely on CPU 6 (core 0, socket 6)
Dec  8 04:49:55 np0005550137 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  8 04:49:55 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[97606]: 08/12/2025 09:49:55 : epoch 69369efc : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f5710002520 fd 48 proxy ignored for local
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:49:55 np0005550137 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec  8 04:49:55 np0005550137 systemd[1]: Started Process Core Dump (PID 102885/UID 0).
Dec  8 04:49:55 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:55 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:55 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:55.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: Updating compute-1:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: Updating compute-2:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: Updating compute-0:/var/lib/ceph/ceb838ef-9d5d-54e4-bddb-2f01adce2ad4/config/ceph.client.admin.keyring
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:49:55 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:49:55 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec  8 04:49:55 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec  8 04:49:55 np0005550137 podman[102977]: 2025-12-08 09:49:55.784779543 +0000 UTC m=+0.053187446 container create 4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  8 04:49:55 np0005550137 systemd[1]: Started libpod-conmon-4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f.scope.
Dec  8 04:49:55 np0005550137 podman[102977]: 2025-12-08 09:49:55.76261051 +0000 UTC m=+0.031018493 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:49:55 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:55 np0005550137 podman[102977]: 2025-12-08 09:49:55.883198974 +0000 UTC m=+0.151606907 container init 4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Dec  8 04:49:55 np0005550137 podman[102977]: 2025-12-08 09:49:55.892595409 +0000 UTC m=+0.161003322 container start 4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_noether, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:49:55 np0005550137 podman[102977]: 2025-12-08 09:49:55.898012654 +0000 UTC m=+0.166420567 container attach 4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True)
Dec  8 04:49:55 np0005550137 jovial_noether[102994]: 167 167
Dec  8 04:49:55 np0005550137 systemd[1]: libpod-4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f.scope: Deactivated successfully.
Dec  8 04:49:55 np0005550137 podman[102999]: 2025-12-08 09:49:55.940323919 +0000 UTC m=+0.027387774 container died 4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_noether, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:49:55 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ::ffff:192.168.122.100 - - [08/Dec/2025:09:49:55] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Dec  8 04:49:55 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.access.140536868639312] ::ffff:192.168.122.100 - - [08/Dec/2025:09:49:55] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Dec  8 04:49:56 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec  8 04:49:56 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec  8 04:49:56 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec  8 04:49:56 np0005550137 systemd-coredump[102890]: Process 97610 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f57c51ad32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  8 04:49:56 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec  8 04:49:56 np0005550137 systemd[1]: var-lib-containers-storage-overlay-35b52eeb6d05196125ba97a565ff204d2b578fa0ae845a7a78463a6dcdad8310-merged.mount: Deactivated successfully.
Dec  8 04:49:56 np0005550137 podman[102999]: 2025-12-08 09:49:56.612761998 +0000 UTC m=+0.699825812 container remove 4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  8 04:49:56 np0005550137 systemd[1]: libpod-conmon-4bda3f11ace6c3f095084f0d3a8dd91886abd76b83dbd4ebb6fb9cb2560c932f.scope: Deactivated successfully.
Dec  8 04:49:56 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Dec  8 04:49:56 np0005550137 systemd[1]: systemd-coredump@0-102885-0.service: Deactivated successfully.
Dec  8 04:49:56 np0005550137 systemd[1]: systemd-coredump@0-102885-0.service: Consumed 1.268s CPU time.
Dec  8 04:49:56 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Dec  8 04:49:56 np0005550137 podman[103021]: 2025-12-08 09:49:56.737524999 +0000 UTC m=+0.029962331 container died 7034b3b818aede2fb8291924130aa2cf54d35eaefa864ea9d8e70488e88a0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Dec  8 04:49:56 np0005550137 systemd[1]: var-lib-containers-storage-overlay-f37e944498f6df2867e24eb13592b428c6a0b02cab302eb4e0a1d9de1c4c7bfa-merged.mount: Deactivated successfully.
Dec  8 04:49:56 np0005550137 podman[103021]: 2025-12-08 09:49:56.773732249 +0000 UTC m=+0.066169551 container remove 7034b3b818aede2fb8291924130aa2cf54d35eaefa864ea9d8e70488e88a0698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  8 04:49:56 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@nfs.cephfs.2.0.compute-0.cuvvno.service: Main process exited, code=exited, status=139/n/a
Dec  8 04:49:56 np0005550137 podman[103036]: 2025-12-08 09:49:56.786895538 +0000 UTC m=+0.050413983 container create a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hawking, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  8 04:49:56 np0005550137 systemd[1]: Started libpod-conmon-a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f.scope.
Dec  8 04:49:56 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:56 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712ef9ccc9e02f37f8a01ac83e2a47c52d14afbf959f47b1fc7aab0740c40b6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:56 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712ef9ccc9e02f37f8a01ac83e2a47c52d14afbf959f47b1fc7aab0740c40b6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:56 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712ef9ccc9e02f37f8a01ac83e2a47c52d14afbf959f47b1fc7aab0740c40b6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:56 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712ef9ccc9e02f37f8a01ac83e2a47c52d14afbf959f47b1fc7aab0740c40b6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:56 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/712ef9ccc9e02f37f8a01ac83e2a47c52d14afbf959f47b1fc7aab0740c40b6c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:56 np0005550137 podman[103036]: 2025-12-08 09:49:56.850156991 +0000 UTC m=+0.113675436 container init a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hawking, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Dec  8 04:49:56 np0005550137 podman[103036]: 2025-12-08 09:49:56.858515705 +0000 UTC m=+0.122034150 container start a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hawking, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec  8 04:49:56 np0005550137 podman[103036]: 2025-12-08 09:49:56.76355545 +0000 UTC m=+0.027073915 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:49:56 np0005550137 podman[103036]: 2025-12-08 09:49:56.861479765 +0000 UTC m=+0.124998220 container attach a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:49:56 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:56 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:49:56 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:56.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:49:56 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@nfs.cephfs.2.0.compute-0.cuvvno.service: Failed with result 'exit-code'.
Dec  8 04:49:56 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@nfs.cephfs.2.0.compute-0.cuvvno.service: Consumed 1.679s CPU time.
Dec  8 04:49:57 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v15: 353 pgs: 1 active+clean+scrubbing, 2 active+remapped, 350 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 14 op/s; 54 B/s, 3 objects/s recovering
Dec  8 04:49:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec  8 04:49:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  8 04:49:57 np0005550137 zen_hawking[103057]: --> passed data devices: 0 physical, 1 LVM
Dec  8 04:49:57 np0005550137 zen_hawking[103057]: --> All data devices are unavailable
Dec  8 04:49:57 np0005550137 systemd[1]: libpod-a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f.scope: Deactivated successfully.
Dec  8 04:49:57 np0005550137 podman[103036]: 2025-12-08 09:49:57.251287817 +0000 UTC m=+0.514806292 container died a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hawking, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec  8 04:49:57 np0005550137 systemd[1]: var-lib-containers-storage-overlay-712ef9ccc9e02f37f8a01ac83e2a47c52d14afbf959f47b1fc7aab0740c40b6c-merged.mount: Deactivated successfully.
Dec  8 04:49:57 np0005550137 podman[103036]: 2025-12-08 09:49:57.305294298 +0000 UTC m=+0.568812773 container remove a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hawking, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Dec  8 04:49:57 np0005550137 systemd[1]: libpod-conmon-a1f49b07e9ec6304bd190dc5a262c3ee58b5d0195fa398f12759dbf39ea0cc5f.scope: Deactivated successfully.
Dec  8 04:49:57 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:57 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:57 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:57.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec  8 04:49:57 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Dec  8 04:49:57 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  8 04:49:57 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec  8 04:49:57 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec  8 04:49:57 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Dec  8 04:49:57 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Dec  8 04:49:57 np0005550137 podman[103203]: 2025-12-08 09:49:57.914117054 +0000 UTC m=+0.052298990 container create ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kalam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  8 04:49:57 np0005550137 systemd[1]: Started libpod-conmon-ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320.scope.
Dec  8 04:49:57 np0005550137 podman[103203]: 2025-12-08 09:49:57.888770994 +0000 UTC m=+0.026953000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:49:57 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:58 np0005550137 podman[103203]: 2025-12-08 09:49:58.007560183 +0000 UTC m=+0.145742109 container init ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kalam, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Dec  8 04:49:58 np0005550137 podman[103203]: 2025-12-08 09:49:58.013260556 +0000 UTC m=+0.151442472 container start ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kalam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Dec  8 04:49:58 np0005550137 podman[103203]: 2025-12-08 09:49:58.016252957 +0000 UTC m=+0.154434893 container attach ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kalam, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:49:58 np0005550137 relaxed_kalam[103219]: 167 167
Dec  8 04:49:58 np0005550137 systemd[1]: libpod-ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320.scope: Deactivated successfully.
Dec  8 04:49:58 np0005550137 podman[103203]: 2025-12-08 09:49:58.018310829 +0000 UTC m=+0.156492745 container died ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kalam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Dec  8 04:49:58 np0005550137 systemd[1]: var-lib-containers-storage-overlay-a08476679ed22c031f92c7b4020d7cfa4f0f4243841ddc43a01137a9bf190045-merged.mount: Deactivated successfully.
Dec  8 04:49:58 np0005550137 podman[103203]: 2025-12-08 09:49:58.064692939 +0000 UTC m=+0.202874855 container remove ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_kalam, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  8 04:49:58 np0005550137 systemd[1]: libpod-conmon-ba522b2bdb11929235b6052e255f446b40b9de729fb635e05cebfa9642883320.scope: Deactivated successfully.
Dec  8 04:49:58 np0005550137 podman[103242]: 2025-12-08 09:49:58.208840078 +0000 UTC m=+0.040069158 container create c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  8 04:49:58 np0005550137 systemd[1]: Started libpod-conmon-c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34.scope.
Dec  8 04:49:58 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:49:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca03d383957e2f2e54c7af1235c45084a0cfee72d6565f0b008c0c8f9238645/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca03d383957e2f2e54c7af1235c45084a0cfee72d6565f0b008c0c8f9238645/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca03d383957e2f2e54c7af1235c45084a0cfee72d6565f0b008c0c8f9238645/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:58 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fca03d383957e2f2e54c7af1235c45084a0cfee72d6565f0b008c0c8f9238645/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:49:58 np0005550137 podman[103242]: 2025-12-08 09:49:58.19113056 +0000 UTC m=+0.022359670 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:49:58 np0005550137 podman[103242]: 2025-12-08 09:49:58.294185671 +0000 UTC m=+0.125414781 container init c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec  8 04:49:58 np0005550137 podman[103242]: 2025-12-08 09:49:58.305151614 +0000 UTC m=+0.136380694 container start c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:49:58 np0005550137 podman[103242]: 2025-12-08 09:49:58.308178196 +0000 UTC m=+0.139407276 container attach c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]: {
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:    "1": [
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:        {
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "devices": [
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "/dev/loop3"
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            ],
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "lv_name": "ceph_lv0",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "lv_size": "21470642176",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ceb838ef-9d5d-54e4-bddb-2f01adce2ad4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=10863df8-16d4-4896-ae26-227efb76290e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "lv_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "name": "ceph_lv0",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "tags": {
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.block_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.cephx_lockbox_secret": "",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.cluster_fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.cluster_name": "ceph",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.crush_device_class": "",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.encrypted": "0",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.osd_fsid": "10863df8-16d4-4896-ae26-227efb76290e",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.osd_id": "1",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.type": "block",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.vdo": "0",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:                "ceph.with_tpm": "0"
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            },
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "type": "block",
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:            "vg_name": "ceph_vg0"
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:        }
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]:    ]
Dec  8 04:49:58 np0005550137 amazing_thompson[103259]: }
Dec  8 04:49:58 np0005550137 systemd[1]: libpod-c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34.scope: Deactivated successfully.
Dec  8 04:49:58 np0005550137 podman[103242]: 2025-12-08 09:49:58.609932703 +0000 UTC m=+0.441161813 container died c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:49:58 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec  8 04:49:58 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec  8 04:49:58 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec  8 04:49:58 np0005550137 systemd[1]: var-lib-containers-storage-overlay-fca03d383957e2f2e54c7af1235c45084a0cfee72d6565f0b008c0c8f9238645-merged.mount: Deactivated successfully.
Dec  8 04:49:58 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec  8 04:49:58 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.a scrub starts
Dec  8 04:49:58 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.a scrub ok
Dec  8 04:49:58 np0005550137 podman[103242]: 2025-12-08 09:49:58.6595407 +0000 UTC m=+0.490769790 container remove c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:49:58 np0005550137 systemd[1]: libpod-conmon-c6ae2a1a845a1afb117f11f760fd5e153ac332149df739b46e22b53f51140e34.scope: Deactivated successfully.
Dec  8 04:49:58 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:58 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:49:58 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:49:58.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:49:59 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v18: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 82 B/s, 3 objects/s recovering
Dec  8 04:49:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec  8 04:49:59 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  8 04:49:59 np0005550137 podman[103368]: 2025-12-08 09:49:59.293295674 +0000 UTC m=+0.022737012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:49:59 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:49:59 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:49:59 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:49:59.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:49:59 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.e scrub starts
Dec  8 04:49:59 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.e scrub ok
Dec  8 04:49:59 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec  8 04:49:59 np0005550137 podman[103368]: 2025-12-08 09:49:59.931819702 +0000 UTC m=+0.661261050 container create dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:49:59 np0005550137 systemd[1]: Started libpod-conmon-dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158.scope.
Dec  8 04:50:00 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:00 np0005550137 ceph-mon[74516]: log_channel(cluster) log [INF] : overall HEALTH_OK
Dec  8 04:50:00 np0005550137 podman[103368]: 2025-12-08 09:50:00.016925087 +0000 UTC m=+0.746366475 container init dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Dec  8 04:50:00 np0005550137 podman[103368]: 2025-12-08 09:50:00.024498258 +0000 UTC m=+0.753939606 container start dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Dec  8 04:50:00 np0005550137 podman[103368]: 2025-12-08 09:50:00.029057986 +0000 UTC m=+0.758499374 container attach dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:50:00 np0005550137 romantic_shtern[103384]: 167 167
Dec  8 04:50:00 np0005550137 systemd[1]: libpod-dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158.scope: Deactivated successfully.
Dec  8 04:50:00 np0005550137 conmon[103384]: conmon dc23a66ef993561c578a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158.scope/container/memory.events
Dec  8 04:50:00 np0005550137 podman[103368]: 2025-12-08 09:50:00.033900653 +0000 UTC m=+0.763341971 container died dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Dec  8 04:50:00 np0005550137 systemd[1]: var-lib-containers-storage-overlay-c3a2c1dffe2cfebffd9d654ffd04538029a585e1e0e5b8d141edd9fe615b6c9f-merged.mount: Deactivated successfully.
Dec  8 04:50:00 np0005550137 podman[103368]: 2025-12-08 09:50:00.091336169 +0000 UTC m=+0.820777517 container remove dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_shtern, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:00 np0005550137 systemd[1]: libpod-conmon-dc23a66ef993561c578ad1d5cb2d23bce7b066c0e0f02458839d7eb792a79158.scope: Deactivated successfully.
Dec  8 04:50:00 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Dec  8 04:50:00 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  8 04:50:00 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec  8 04:50:00 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec  8 04:50:00 np0005550137 podman[103406]: 2025-12-08 09:50:00.313684153 +0000 UTC m=+0.049166734 container create 87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Dec  8 04:50:00 np0005550137 systemd[1]: Started libpod-conmon-87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92.scope.
Dec  8 04:50:00 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:00 np0005550137 podman[103406]: 2025-12-08 09:50:00.293366927 +0000 UTC m=+0.028849518 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:00 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fba78f60a4f03e8519c538bfab1fb836dc8b9c9c96f7d2701faf53495bf9b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:00 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fba78f60a4f03e8519c538bfab1fb836dc8b9c9c96f7d2701faf53495bf9b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:00 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fba78f60a4f03e8519c538bfab1fb836dc8b9c9c96f7d2701faf53495bf9b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:00 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fba78f60a4f03e8519c538bfab1fb836dc8b9c9c96f7d2701faf53495bf9b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:00 np0005550137 podman[103406]: 2025-12-08 09:50:00.405239235 +0000 UTC m=+0.140721866 container init 87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_dhawan, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Dec  8 04:50:00 np0005550137 podman[103406]: 2025-12-08 09:50:00.417315842 +0000 UTC m=+0.152798413 container start 87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:00 np0005550137 podman[103406]: 2025-12-08 09:50:00.420992554 +0000 UTC m=+0.156475235 container attach 87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_dhawan, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  8 04:50:00 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.c deep-scrub starts
Dec  8 04:50:00 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.c deep-scrub ok
Dec  8 04:50:00 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:00 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec  8 04:50:00 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:00.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec  8 04:50:01 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v20: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 2 objects/s recovering
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  8 04:50:01 np0005550137 lvm[103498]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:50:01 np0005550137 lvm[103498]: VG ceph_vg0 finished
Dec  8 04:50:01 np0005550137 naughty_dhawan[103422]: {}
Dec  8 04:50:01 np0005550137 systemd[1]: libpod-87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92.scope: Deactivated successfully.
Dec  8 04:50:01 np0005550137 systemd[1]: libpod-87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92.scope: Consumed 1.230s CPU time.
Dec  8 04:50:01 np0005550137 podman[103406]: 2025-12-08 09:50:01.178992392 +0000 UTC m=+0.914474973 container died 87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec  8 04:50:01 np0005550137 systemd[1]: var-lib-containers-storage-overlay-63fba78f60a4f03e8519c538bfab1fb836dc8b9c9c96f7d2701faf53495bf9b5-merged.mount: Deactivated successfully.
Dec  8 04:50:01 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo[98033]: [WARNING] 341/095001 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  8 04:50:01 np0005550137 podman[103406]: 2025-12-08 09:50:01.238991415 +0000 UTC m=+0.974474036 container remove 87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_dhawan, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:50:01 np0005550137 systemd[1]: libpod-conmon-87218b03c3348bec1d498cfa84272c66e60b10c627b29f239e833e1f1382fb92.scope: Deactivated successfully.
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: overall HEALTH_OK
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec  8 04:50:01 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 88 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=2 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=88 pruub=8.522350311s) [0] r=-1 lpr=88 pi=[52,88)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 218.521179199s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:01 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 88 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=2 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=88 pruub=8.522304535s) [0] r=-1 lpr=88 pi=[52,88)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 218.521179199s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:01 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:01 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:01 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:01.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:01 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Dec  8 04:50:01 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:01 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:01 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec  8 04:50:01 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec  8 04:50:01 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Dec  8 04:50:01 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Dec  8 04:50:02 np0005550137 podman[103627]: 2025-12-08 09:50:02.066723302 +0000 UTC m=+0.058366415 container create 8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda (image=quay.io/ceph/ceph:v19, name=suspicious_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Dec  8 04:50:02 np0005550137 systemd[1]: Started libpod-conmon-8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda.scope.
Dec  8 04:50:02 np0005550137 podman[103627]: 2025-12-08 09:50:02.041631409 +0000 UTC m=+0.033274542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:50:02 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  8 04:50:02 np0005550137 podman[103627]: 2025-12-08 09:50:02.169112213 +0000 UTC m=+0.160755376 container init 8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda (image=quay.io/ceph/ceph:v19, name=suspicious_banach, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Dec  8 04:50:02 np0005550137 podman[103627]: 2025-12-08 09:50:02.178087866 +0000 UTC m=+0.169730939 container start 8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda (image=quay.io/ceph/ceph:v19, name=suspicious_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Dec  8 04:50:02 np0005550137 podman[103627]: 2025-12-08 09:50:02.181531739 +0000 UTC m=+0.173174812 container attach 8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda (image=quay.io/ceph/ceph:v19, name=suspicious_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:50:02 np0005550137 suspicious_banach[103644]: 167 167
Dec  8 04:50:02 np0005550137 systemd[1]: libpod-8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda.scope: Deactivated successfully.
Dec  8 04:50:02 np0005550137 podman[103627]: 2025-12-08 09:50:02.187633465 +0000 UTC m=+0.179276538 container died 8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda (image=quay.io/ceph/ceph:v19, name=suspicious_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec  8 04:50:02 np0005550137 systemd[1]: var-lib-containers-storage-overlay-2f8bff89feb37b7e309a83971ea0e51d0750401eac4a147e0766f06559788c53-merged.mount: Deactivated successfully.
Dec  8 04:50:02 np0005550137 podman[103627]: 2025-12-08 09:50:02.228720823 +0000 UTC m=+0.220363936 container remove 8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda (image=quay.io/ceph/ceph:v19, name=suspicious_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  8 04:50:02 np0005550137 systemd[1]: libpod-conmon-8a27eaddd7bf866ffb86749a35bd4516434583983eb17790ab80a6d7e7fc1cda.scope: Deactivated successfully.
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.kitiwu (monmap changed)...
Dec  8 04:50:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.kitiwu (monmap changed)...
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.kitiwu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kitiwu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.kitiwu on compute-0
Dec  8 04:50:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.kitiwu on compute-0
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: Reconfiguring mon.compute-0 (monmap changed)...
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: Reconfiguring daemon mon.compute-0 on compute-0
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec  8 04:50:02 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 89 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=2 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=89) [0]/[1] r=0 lpr=89 pi=[52,89)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:02 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 89 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=2 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=89) [0]/[1] r=0 lpr=89 pi=[52,89)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:50:02 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.b scrub starts
Dec  8 04:50:02 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.b scrub ok
Dec  8 04:50:02 np0005550137 podman[103728]: 2025-12-08 09:50:02.749053322 +0000 UTC m=+0.052614550 container create 0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde (image=quay.io/ceph/ceph:v19, name=distracted_moser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:50:02 np0005550137 systemd[1]: Started libpod-conmon-0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde.scope.
Dec  8 04:50:02 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:02 np0005550137 podman[103728]: 2025-12-08 09:50:02.726951609 +0000 UTC m=+0.030512827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Dec  8 04:50:02 np0005550137 podman[103728]: 2025-12-08 09:50:02.837347264 +0000 UTC m=+0.140908492 container init 0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde (image=quay.io/ceph/ceph:v19, name=distracted_moser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec  8 04:50:02 np0005550137 podman[103728]: 2025-12-08 09:50:02.843491791 +0000 UTC m=+0.147053009 container start 0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde (image=quay.io/ceph/ceph:v19, name=distracted_moser, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:50:02 np0005550137 distracted_moser[103744]: 167 167
Dec  8 04:50:02 np0005550137 podman[103728]: 2025-12-08 09:50:02.847155812 +0000 UTC m=+0.150717000 container attach 0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde (image=quay.io/ceph/ceph:v19, name=distracted_moser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec  8 04:50:02 np0005550137 systemd[1]: libpod-0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde.scope: Deactivated successfully.
Dec  8 04:50:02 np0005550137 podman[103728]: 2025-12-08 09:50:02.84875404 +0000 UTC m=+0.152315228 container died 0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde (image=quay.io/ceph/ceph:v19, name=distracted_moser, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Dec  8 04:50:02 np0005550137 systemd[1]: var-lib-containers-storage-overlay-5ffdd129f7705f3168f54fa54778019aeb84f1777a22e49ac324e3c0a3932493-merged.mount: Deactivated successfully.
Dec  8 04:50:02 np0005550137 podman[103728]: 2025-12-08 09:50:02.888497288 +0000 UTC m=+0.192058516 container remove 0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde (image=quay.io/ceph/ceph:v19, name=distracted_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:50:02 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:02 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:02 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:02.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:02 np0005550137 systemd[1]: libpod-conmon-0a4c91306b02e02389a062a3fca6fd7de1ceeb1050ed49db21f6bedda9827fde.scope: Deactivated successfully.
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Dec  8 04:50:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:02 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:02 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Dec  8 04:50:02 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Dec  8 04:50:03 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v23: 353 pgs: 2 active+remapped, 351 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: Reconfiguring mgr.compute-0.kitiwu (monmap changed)...
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kitiwu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: Reconfiguring daemon mgr.compute-0.kitiwu on compute-0
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec  8 04:50:03 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 90 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=90 pruub=14.479331017s) [0] r=-1 lpr=90 pi=[52,90)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 226.521347046s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:03 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 90 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=90 pruub=14.479293823s) [0] r=-1 lpr=90 pi=[52,90)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.521347046s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:03 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 90 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=89/90 n=2 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=89) [0]/[1] async=[0] r=0 lpr=89 pi=[52,89)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:50:03 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:03 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:03 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:03.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:03 np0005550137 podman[103830]: 2025-12-08 09:50:03.542846667 +0000 UTC m=+0.065495811 container create 218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:50:03 np0005550137 systemd[1]: Started libpod-conmon-218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c.scope.
Dec  8 04:50:03 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:03 np0005550137 podman[103830]: 2025-12-08 09:50:03.523322754 +0000 UTC m=+0.045971878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:03 np0005550137 podman[103830]: 2025-12-08 09:50:03.627070266 +0000 UTC m=+0.149719420 container init 218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:50:03 np0005550137 podman[103830]: 2025-12-08 09:50:03.636822232 +0000 UTC m=+0.159471366 container start 218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Dec  8 04:50:03 np0005550137 podman[103830]: 2025-12-08 09:50:03.640910046 +0000 UTC m=+0.163559190 container attach 218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:50:03 np0005550137 charming_shamir[103846]: 167 167
Dec  8 04:50:03 np0005550137 systemd[1]: libpod-218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c.scope: Deactivated successfully.
Dec  8 04:50:03 np0005550137 podman[103830]: 2025-12-08 09:50:03.643153904 +0000 UTC m=+0.165803038 container died 218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  8 04:50:03 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Dec  8 04:50:03 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Dec  8 04:50:03 np0005550137 systemd[1]: var-lib-containers-storage-overlay-0184f552f280a75de9437b19c0427ef12ab6335f1133d5428c3ff67109e2559b-merged.mount: Deactivated successfully.
Dec  8 04:50:03 np0005550137 podman[103830]: 2025-12-08 09:50:03.691582926 +0000 UTC m=+0.214232030 container remove 218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_shamir, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Dec  8 04:50:03 np0005550137 systemd[1]: libpod-conmon-218056d4981bc2decda5b1fabe68cccf88424f2695c52936ce06ea1c296fc17c.scope: Deactivated successfully.
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:03 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Dec  8 04:50:03 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:03 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:03 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-0
Dec  8 04:50:03 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-0
Dec  8 04:50:04 np0005550137 podman[103930]: 2025-12-08 09:50:04.237360697 +0000 UTC m=+0.026887397 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec  8 04:50:04 np0005550137 podman[103930]: 2025-12-08 09:50:04.592924519 +0000 UTC m=+0.382451239 container create 00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamport, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec  8 04:50:04 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 91 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=91) [0]/[1] r=0 lpr=91 pi=[52,91)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:04 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 91 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=91) [0]/[1] r=0 lpr=91 pi=[52,91)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:50:04 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 91 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=89/90 n=2 ec=52/36 lis/c=89/52 les/c/f=90/53/0 sis=91 pruub=14.748675346s) [0] async=[0] r=-1 lpr=91 pi=[52,91)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 228.048370361s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:04 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 91 pg[9.10( v 49'1026 (0'0,49'1026] local-lis/les=89/90 n=2 ec=52/36 lis/c=89/52 les/c/f=90/53/0 sis=91 pruub=14.748625755s) [0] r=-1 lpr=91 pi=[52,91)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 228.048370361s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: Reconfiguring crash.compute-0 (monmap changed)...
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: Reconfiguring daemon crash.compute-0 on compute-0
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Dec  8 04:50:04 np0005550137 systemd[1]: Started libpod-conmon-00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a.scope.
Dec  8 04:50:04 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:04 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Dec  8 04:50:04 np0005550137 podman[103930]: 2025-12-08 09:50:04.68447958 +0000 UTC m=+0.474006290 container init 00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamport, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:50:04 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Dec  8 04:50:04 np0005550137 podman[103930]: 2025-12-08 09:50:04.692745621 +0000 UTC m=+0.482272341 container start 00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamport, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec  8 04:50:04 np0005550137 ecstatic_lamport[103948]: 167 167
Dec  8 04:50:04 np0005550137 podman[103930]: 2025-12-08 09:50:04.696576028 +0000 UTC m=+0.486102718 container attach 00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamport, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:04 np0005550137 systemd[1]: libpod-00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a.scope: Deactivated successfully.
Dec  8 04:50:04 np0005550137 podman[103930]: 2025-12-08 09:50:04.697067713 +0000 UTC m=+0.486594423 container died 00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:50:04 np0005550137 systemd[1]: var-lib-containers-storage-overlay-d927a22bef04a7e0978ff3eb3187c9194c97706632b61d9d00f8405882751939-merged.mount: Deactivated successfully.
Dec  8 04:50:04 np0005550137 podman[103930]: 2025-12-08 09:50:04.742940686 +0000 UTC m=+0.532467386 container remove 00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_lamport, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:50:04 np0005550137 systemd[1]: libpod-conmon-00e6c709296791ff365fe0f0c0e09c4cab6eee4bdb1ddeffe1b6583d1691b31a.scope: Deactivated successfully.
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:04 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:04 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:04 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:04.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:04 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring rgw.rgw.compute-0.slkrtm (unknown last config time)...
Dec  8 04:50:04 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring rgw.rgw.compute-0.slkrtm (unknown last config time)...
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:04 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon rgw.rgw.compute-0.slkrtm on compute-0
Dec  8 04:50:04 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon rgw.rgw.compute-0.slkrtm on compute-0
Dec  8 04:50:04 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:50:05 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 1 remapped+peering, 2 active+remapped, 350 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:05 np0005550137 podman[104038]: 2025-12-08 09:50:05.387488658 +0000 UTC m=+0.045621947 container create 71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kalam, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:50:05 np0005550137 systemd[1]: Started libpod-conmon-71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4.scope.
Dec  8 04:50:05 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:05 np0005550137 podman[104038]: 2025-12-08 09:50:05.368958426 +0000 UTC m=+0.027091795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:05 np0005550137 podman[104038]: 2025-12-08 09:50:05.474929125 +0000 UTC m=+0.133062434 container init 71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kalam, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:50:05 np0005550137 podman[104038]: 2025-12-08 09:50:05.482746412 +0000 UTC m=+0.140879721 container start 71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:05 np0005550137 beautiful_kalam[104055]: 167 167
Dec  8 04:50:05 np0005550137 podman[104038]: 2025-12-08 09:50:05.486591869 +0000 UTC m=+0.144725208 container attach 71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kalam, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:50:05 np0005550137 systemd[1]: libpod-71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4.scope: Deactivated successfully.
Dec  8 04:50:05 np0005550137 podman[104038]: 2025-12-08 09:50:05.487604579 +0000 UTC m=+0.145737928 container died 71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kalam, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:05 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:05 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:05 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:05.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:05 np0005550137 systemd[1]: var-lib-containers-storage-overlay-f7c94066c228954dce6fd19ff42051bf3a4e4bbe311d5d2227e3189d37a2ca66-merged.mount: Deactivated successfully.
Dec  8 04:50:05 np0005550137 podman[104038]: 2025-12-08 09:50:05.526177322 +0000 UTC m=+0.184310621 container remove 71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec  8 04:50:05 np0005550137 systemd[1]: libpod-conmon-71b3019e1d90e7588785011add00735055cfaca4e272ec43fa098724481b8bd4.scope: Deactivated successfully.
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:05 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec  8 04:50:05 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec  8 04:50:05 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec  8 04:50:05 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: Reconfiguring osd.1 (monmap changed)...
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: Reconfiguring daemon osd.1 on compute-0
Dec  8 04:50:05 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 92 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=91/92 n=5 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=91) [0]/[1] async=[0] r=0 lpr=91 pi=[52,91)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.slkrtm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:05 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:05 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ::ffff:192.168.122.100 - - [08/Dec/2025:09:50:05] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Dec  8 04:50:05 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.access.140536868639312] ::ffff:192.168.122.100 - - [08/Dec/2025:09:50:05] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Dec  8 04:50:06 np0005550137 systemd[1]: Stopping Ceph node-exporter.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:50:06 np0005550137 podman[104168]: 2025-12-08 09:50:06.18706291 +0000 UTC m=+0.046986379 container died 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:06 np0005550137 systemd[1]: var-lib-containers-storage-overlay-f7401ea864925811372830366978da613707859a0568bf1f0ffe1780de2241ec-merged.mount: Deactivated successfully.
Dec  8 04:50:06 np0005550137 podman[104168]: 2025-12-08 09:50:06.22459184 +0000 UTC m=+0.084515309 container remove 76b11ba15419197dfbc2b41db10c193f2abd6fd1997bcedb2a4f58e24399d05c (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:06 np0005550137 bash[104168]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0
Dec  8 04:50:06 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Dec  8 04:50:06 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@node-exporter.compute-0.service: Failed with result 'exit-code'.
Dec  8 04:50:06 np0005550137 systemd[1]: Stopped Ceph node-exporter.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:50:06 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@node-exporter.compute-0.service: Consumed 2.069s CPU time.
Dec  8 04:50:06 np0005550137 systemd[1]: Starting Ceph node-exporter.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: Reconfiguring rgw.rgw.compute-0.slkrtm (unknown last config time)...
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: Reconfiguring daemon rgw.rgw.compute-0.slkrtm on compute-0
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec  8 04:50:06 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 93 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=91/92 n=5 ec=52/36 lis/c=91/52 les/c/f=92/53/0 sis=93 pruub=14.965817451s) [0] async=[0] r=-1 lpr=93 pi=[52,93)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 230.332641602s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:06 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 93 pg[9.11( v 49'1026 (0'0,49'1026] local-lis/les=91/92 n=5 ec=52/36 lis/c=91/52 les/c/f=92/53/0 sis=93 pruub=14.965740204s) [0] r=-1 lpr=93 pi=[52,93)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 230.332641602s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:06 np0005550137 podman[104269]: 2025-12-08 09:50:06.69232773 +0000 UTC m=+0.058706125 container create a993be6ff2aac952a2d6ade088491ae2d5efa7b34003bbbc41aca7a803586ead (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:06 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e49339f714f7ea7b45dec52169b36501490519ff1a74ae094642212c21f17ed/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:06 np0005550137 podman[104269]: 2025-12-08 09:50:06.752128057 +0000 UTC m=+0.118506482 container init a993be6ff2aac952a2d6ade088491ae2d5efa7b34003bbbc41aca7a803586ead (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:06 np0005550137 podman[104269]: 2025-12-08 09:50:06.758177171 +0000 UTC m=+0.124555566 container start a993be6ff2aac952a2d6ade088491ae2d5efa7b34003bbbc41aca7a803586ead (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:06 np0005550137 bash[104269]: a993be6ff2aac952a2d6ade088491ae2d5efa7b34003bbbc41aca7a803586ead
Dec  8 04:50:06 np0005550137 podman[104269]: 2025-12-08 09:50:06.671199919 +0000 UTC m=+0.037578334 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.765Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.765Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.766Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.766Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  8 04:50:06 np0005550137 systemd[1]: Started Ceph node-exporter.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=arp
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=bcache
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=bonding
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=cpu
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=dmi
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=edac
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=entropy
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=filefd
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=hwmon
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.767Z caller=node_exporter.go:117 level=info collector=netclass
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=netdev
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=netstat
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=nfs
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=nvme
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=os
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=pressure
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=rapl
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=selinux
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=softnet
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=stat
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=textfile
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=thermal_zone
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=time
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=uname
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=xfs
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.768Z caller=node_exporter.go:117 level=info collector=zfs
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.769Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Dec  8 04:50:06 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0[104284]: ts=2025-12-08T09:50:06.769Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:06 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:06 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  8 04:50:06 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  8 04:50:06 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  8 04:50:06 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  8 04:50:06 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:06 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.002000061s ======
Dec  8 04:50:06 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:06.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000061s
Dec  8 04:50:06 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@nfs.cephfs.2.0.compute-0.cuvvno.service: Scheduled restart job, restart counter is at 1.
Dec  8 04:50:06 np0005550137 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.cuvvno for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:50:06 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@nfs.cephfs.2.0.compute-0.cuvvno.service: Consumed 1.679s CPU time.
Dec  8 04:50:07 np0005550137 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.cuvvno for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:50:07 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v29: 353 pgs: 1 remapped+peering, 2 active+remapped, 350 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:07 np0005550137 podman[104391]: 2025-12-08 09:50:07.258352566 +0000 UTC m=+0.057919120 container create 3d3ae79956baf0088b3c44608b1b17208fadcf8da8dd9e93dccde248bbc68292 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:50:07 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be1305a2ab343e02f225e5fe8a403c8ae824de8773ff5f052831bda71d9a76c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:07 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be1305a2ab343e02f225e5fe8a403c8ae824de8773ff5f052831bda71d9a76c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:07 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be1305a2ab343e02f225e5fe8a403c8ae824de8773ff5f052831bda71d9a76c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:07 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1be1305a2ab343e02f225e5fe8a403c8ae824de8773ff5f052831bda71d9a76c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.cuvvno-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:07 np0005550137 podman[104391]: 2025-12-08 09:50:07.319962448 +0000 UTC m=+0.119529042 container init 3d3ae79956baf0088b3c44608b1b17208fadcf8da8dd9e93dccde248bbc68292 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Dec  8 04:50:07 np0005550137 podman[104391]: 2025-12-08 09:50:07.325202817 +0000 UTC m=+0.124769381 container start 3d3ae79956baf0088b3c44608b1b17208fadcf8da8dd9e93dccde248bbc68292 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:50:07 np0005550137 podman[104391]: 2025-12-08 09:50:07.232887303 +0000 UTC m=+0.032453947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:07 np0005550137 bash[104391]: 3d3ae79956baf0088b3c44608b1b17208fadcf8da8dd9e93dccde248bbc68292
Dec  8 04:50:07 np0005550137 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.cuvvno for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:50:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:07 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Dec  8 04:50:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:07 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.340909934 +0000 UTC m=+0.048098612 volume create df134fd56384b5acb6f44436e6edbfb1bd029ecf947f6e0b1d227707cc4832c4
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.351253249 +0000 UTC m=+0.058441917 container create 8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_keldysh, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:07 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Dec  8 04:50:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:07 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Dec  8 04:50:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:07 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Dec  8 04:50:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:07 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Dec  8 04:50:07 np0005550137 systemd[1]: Started libpod-conmon-8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3.scope.
Dec  8 04:50:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:07 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.319404661 +0000 UTC m=+0.026593359 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  8 04:50:07 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:07 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Dec  8 04:50:07 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:07 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe168802cf5dc0833abe3e187e8480644bb32bd7aaf748ac18ed3f50adae4b99/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.454401732 +0000 UTC m=+0.161590400 container init 8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_keldysh, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.463629963 +0000 UTC m=+0.170818631 container start 8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_keldysh, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 confident_keldysh[104465]: 65534 65534
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.467013475 +0000 UTC m=+0.174202173 container attach 8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_keldysh, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 systemd[1]: libpod-8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3.scope: Deactivated successfully.
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.467896132 +0000 UTC m=+0.175084810 container died 8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_keldysh, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 systemd[1]: var-lib-containers-storage-overlay-fe168802cf5dc0833abe3e187e8480644bb32bd7aaf748ac18ed3f50adae4b99-merged.mount: Deactivated successfully.
Dec  8 04:50:07 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:07 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:07 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:07.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.510545008 +0000 UTC m=+0.217733686 container remove 8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3 (image=quay.io/prometheus/alertmanager:v0.25.0, name=confident_keldysh, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 podman[104420]: 2025-12-08 09:50:07.515963593 +0000 UTC m=+0.223152281 volume remove df134fd56384b5acb6f44436e6edbfb1bd029ecf947f6e0b1d227707cc4832c4
Dec  8 04:50:07 np0005550137 systemd[1]: libpod-conmon-8ecc830562ea0dd4ae6fc2228eeddc1a3f7ab9ad4e07e50fa4bf6f221d9c1fa3.scope: Deactivated successfully.
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.579560664 +0000 UTC m=+0.043609685 volume create 0b20da47364ca5d9a63446de1eddaefcabcdffadb7afa4f982c26fb12c49b823
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.592406625 +0000 UTC m=+0.056455636 container create ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536 (image=quay.io/prometheus/alertmanager:v0.25.0, name=admiring_antonelli, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.55934096 +0000 UTC m=+0.023390031 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  8 04:50:07 np0005550137 systemd[1]: Started libpod-conmon-ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536.scope.
Dec  8 04:50:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec  8 04:50:07 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:07 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:07 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:07 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec  8 04:50:07 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed11df5ab62044434e1327d35ecf2464934516257722d4e555719a2e591787c8/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:07 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.709135321 +0000 UTC m=+0.173184342 container init ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536 (image=quay.io/prometheus/alertmanager:v0.25.0, name=admiring_antonelli, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.715996919 +0000 UTC m=+0.180045900 container start ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536 (image=quay.io/prometheus/alertmanager:v0.25.0, name=admiring_antonelli, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 admiring_antonelli[104512]: 65534 65534
Dec  8 04:50:07 np0005550137 systemd[1]: libpod-ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536.scope: Deactivated successfully.
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.719594479 +0000 UTC m=+0.183643460 container attach ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536 (image=quay.io/prometheus/alertmanager:v0.25.0, name=admiring_antonelli, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.719873487 +0000 UTC m=+0.183922468 container died ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536 (image=quay.io/prometheus/alertmanager:v0.25.0, name=admiring_antonelli, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 systemd[1]: var-lib-containers-storage-overlay-ed11df5ab62044434e1327d35ecf2464934516257722d4e555719a2e591787c8-merged.mount: Deactivated successfully.
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.797792235 +0000 UTC m=+0.261841216 container remove ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536 (image=quay.io/prometheus/alertmanager:v0.25.0, name=admiring_antonelli, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:07 np0005550137 podman[104496]: 2025-12-08 09:50:07.80124955 +0000 UTC m=+0.265298531 volume remove 0b20da47364ca5d9a63446de1eddaefcabcdffadb7afa4f982c26fb12c49b823
Dec  8 04:50:07 np0005550137 systemd[1]: libpod-conmon-ed6b703600873cd3d08f47db1b26680c4b07f2a482ae215b9826f29e7e154536.scope: Deactivated successfully.
Dec  8 04:50:07 np0005550137 systemd[1]: Stopping Ceph alertmanager.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[98834]: ts=2025-12-08T09:50:08.082Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Dec  8 04:50:08 np0005550137 podman[104564]: 2025-12-08 09:50:08.093041785 +0000 UTC m=+0.054835358 container died 7099edc240bca550d8ffa93b6e07ba6fcf270dd0e9d3a56c64a700eedb3fc8a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:08 np0005550137 podman[104564]: 2025-12-08 09:50:08.128554663 +0000 UTC m=+0.090348236 container remove 7099edc240bca550d8ffa93b6e07ba6fcf270dd0e9d3a56c64a700eedb3fc8a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:08 np0005550137 podman[104564]: 2025-12-08 09:50:08.132986907 +0000 UTC m=+0.094780490 volume remove ac36cabf1685985f12b4719a79e790ff0b5cfe2ee9af3f5025152a374f3d5695
Dec  8 04:50:08 np0005550137 bash[104564]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0
Dec  8 04:50:08 np0005550137 systemd[1]: var-lib-containers-storage-overlay-2e7624865efeb6a95f84bc47dd5a2668c1b36291a43db5be7b7a6a5e59098268-merged.mount: Deactivated successfully.
Dec  8 04:50:08 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@alertmanager.compute-0.service: Deactivated successfully.
Dec  8 04:50:08 np0005550137 systemd[1]: Stopped Ceph alertmanager.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:50:08 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@alertmanager.compute-0.service: Consumed 1.266s CPU time.
Dec  8 04:50:08 np0005550137 systemd[1]: Starting Ceph alertmanager.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:50:08 np0005550137 podman[104665]: 2025-12-08 09:50:08.491386846 +0000 UTC m=+0.039678226 volume create 9f8b767bde4472abfb181ac34e1cd0591a79251ce8eee54f2cceac3de1a73fdc
Dec  8 04:50:08 np0005550137 podman[104665]: 2025-12-08 09:50:08.502456292 +0000 UTC m=+0.050747712 container create 595e69113afbc821777e75fb319ad360d8a2f8bfd86aab1547345132e22e0412 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:08 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0c70f2b9efacf10510804b960bf498811dd999273d05b076d9a7c835081d68/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:08 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf0c70f2b9efacf10510804b960bf498811dd999273d05b076d9a7c835081d68/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:08 np0005550137 podman[104665]: 2025-12-08 09:50:08.565397794 +0000 UTC m=+0.113689194 container init 595e69113afbc821777e75fb319ad360d8a2f8bfd86aab1547345132e22e0412 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:08 np0005550137 podman[104665]: 2025-12-08 09:50:08.570152649 +0000 UTC m=+0.118444029 container start 595e69113afbc821777e75fb319ad360d8a2f8bfd86aab1547345132e22e0412 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:08 np0005550137 podman[104665]: 2025-12-08 09:50:08.475748291 +0000 UTC m=+0.024039691 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Dec  8 04:50:08 np0005550137 bash[104665]: 595e69113afbc821777e75fb319ad360d8a2f8bfd86aab1547345132e22e0412
Dec  8 04:50:08 np0005550137 systemd[1]: Started Ceph alertmanager.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:08.611Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:08.611Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:08.622Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:08.625Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Dec  8 04:50:08 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:08 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:08 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:08 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:08 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  8 04:50:08 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:08.678Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:08.680Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:08.689Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Dec  8 04:50:08 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:08.689Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Dec  8 04:50:08 np0005550137 ceph-mon[74516]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Dec  8 04:50:08 np0005550137 ceph-mon[74516]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Dec  8 04:50:08 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:08 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:08 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Dec  8 04:50:08 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Dec  8 04:50:08 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:08 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:08 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:08.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:09 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v31: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec  8 04:50:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  8 04:50:09 np0005550137 podman[104767]: 2025-12-08 09:50:09.409615031 +0000 UTC m=+0.047673619 container create 9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7 (image=quay.io/ceph/grafana:10.4.0, name=youthful_euclid, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 systemd[1]: Started libpod-conmon-9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7.scope.
Dec  8 04:50:09 np0005550137 podman[104767]: 2025-12-08 09:50:09.389410788 +0000 UTC m=+0.027469406 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  8 04:50:09 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:09 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:09 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:09 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:09.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:09 np0005550137 podman[104767]: 2025-12-08 09:50:09.511187437 +0000 UTC m=+0.149246105 container init 9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7 (image=quay.io/ceph/grafana:10.4.0, name=youthful_euclid, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 podman[104767]: 2025-12-08 09:50:09.522351786 +0000 UTC m=+0.160410404 container start 9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7 (image=quay.io/ceph/grafana:10.4.0, name=youthful_euclid, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 podman[104767]: 2025-12-08 09:50:09.526620176 +0000 UTC m=+0.164678794 container attach 9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7 (image=quay.io/ceph/grafana:10.4.0, name=youthful_euclid, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 youthful_euclid[104783]: 472 0
Dec  8 04:50:09 np0005550137 systemd[1]: libpod-9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7.scope: Deactivated successfully.
Dec  8 04:50:09 np0005550137 podman[104767]: 2025-12-08 09:50:09.528320768 +0000 UTC m=+0.166379416 container died 9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7 (image=quay.io/ceph/grafana:10.4.0, name=youthful_euclid, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 systemd[1]: var-lib-containers-storage-overlay-fa2bdbcd36dcc9e5230f3341df877b804f6de36ecf38f5f8d45563d08303f9a2-merged.mount: Deactivated successfully.
Dec  8 04:50:09 np0005550137 podman[104767]: 2025-12-08 09:50:09.583227435 +0000 UTC m=+0.221286023 container remove 9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7 (image=quay.io/ceph/grafana:10.4.0, name=youthful_euclid, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 systemd[1]: libpod-conmon-9ac439b132089ae39f8493b3c6a560fda0d02f8f8cb3b2241c9739a1107457c7.scope: Deactivated successfully.
Dec  8 04:50:09 np0005550137 podman[104800]: 2025-12-08 09:50:09.66532859 +0000 UTC m=+0.055164587 container create 97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a (image=quay.io/ceph/grafana:10.4.0, name=peaceful_kowalevski, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec  8 04:50:09 np0005550137 systemd[1]: Started libpod-conmon-97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a.scope.
Dec  8 04:50:09 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  8 04:50:09 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec  8 04:50:09 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec  8 04:50:09 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 95 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=95 pruub=8.115404129s) [0] r=-1 lpr=95 pi=[52,95)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 226.525329590s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:09 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 95 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=95 pruub=8.115352631s) [0] r=-1 lpr=95 pi=[52,95)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 226.525329590s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:09 np0005550137 ceph-mon[74516]: Reconfiguring grafana.compute-0 (dependencies changed)...
Dec  8 04:50:09 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Dec  8 04:50:09 np0005550137 podman[104800]: 2025-12-08 09:50:09.640725523 +0000 UTC m=+0.030561550 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  8 04:50:09 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:09 np0005550137 podman[104800]: 2025-12-08 09:50:09.759002236 +0000 UTC m=+0.148838223 container init 97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a (image=quay.io/ceph/grafana:10.4.0, name=peaceful_kowalevski, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 podman[104800]: 2025-12-08 09:50:09.766033229 +0000 UTC m=+0.155869206 container start 97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a (image=quay.io/ceph/grafana:10.4.0, name=peaceful_kowalevski, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 peaceful_kowalevski[104816]: 472 0
Dec  8 04:50:09 np0005550137 podman[104800]: 2025-12-08 09:50:09.769576777 +0000 UTC m=+0.159412794 container attach 97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a (image=quay.io/ceph/grafana:10.4.0, name=peaceful_kowalevski, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 systemd[1]: libpod-97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a.scope: Deactivated successfully.
Dec  8 04:50:09 np0005550137 podman[104800]: 2025-12-08 09:50:09.788941766 +0000 UTC m=+0.178777763 container died 97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a (image=quay.io/ceph/grafana:10.4.0, name=peaceful_kowalevski, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 systemd[1]: var-lib-containers-storage-overlay-a9bb955cf4b6094da388a6778c9d32a8c698dc999c753f77f9ed5f84a5376c81-merged.mount: Deactivated successfully.
Dec  8 04:50:09 np0005550137 podman[104800]: 2025-12-08 09:50:09.83647587 +0000 UTC m=+0.226311867 container remove 97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a (image=quay.io/ceph/grafana:10.4.0, name=peaceful_kowalevski, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:09 np0005550137 systemd[1]: libpod-conmon-97746a24551a37b2218dbc5604e7c9a036aa042ad44b8421c0e8cb6e65b29e2a.scope: Deactivated successfully.
Dec  8 04:50:09 np0005550137 systemd[1]: Stopping Ceph grafana.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec  8 04:50:10 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 96 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=96) [0]/[1] r=0 lpr=96 pi=[52,96)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:10 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 96 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=52/53 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=96) [0]/[1] r=0 lpr=96 pi=[52,96)/1 crt=49'1026 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=server t=2025-12-08T09:50:10.186595206Z level=info msg="Shutdown started" reason="System signal: terminated"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=ticker t=2025-12-08T09:50:10.186774192Z level=info msg=stopped last_tick=2025-12-08T09:50:10Z
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=tracing t=2025-12-08T09:50:10.186864045Z level=info msg="Closing tracing"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=grafana-apiserver t=2025-12-08T09:50:10.18703435Z level=info msg="StorageObjectCountTracker pruner is exiting"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[99477]: logger=sqlstore.transactions t=2025-12-08T09:50:10.199186678Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Dec  8 04:50:10 np0005550137 podman[104865]: 2025-12-08 09:50:10.227718545 +0000 UTC m=+0.096175562 container died b8b17017c0f8b03c982d86c76026d22cd109a761d95550a2b2d3e0b24e8d9fc9 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:10 np0005550137 systemd[1]: var-lib-containers-storage-overlay-f86b2c750d09f580b9d0ba33e2685eea552d8e935ab26508b8c6038f89080ecb-merged.mount: Deactivated successfully.
Dec  8 04:50:10 np0005550137 podman[104865]: 2025-12-08 09:50:10.283976085 +0000 UTC m=+0.152433072 container remove b8b17017c0f8b03c982d86c76026d22cd109a761d95550a2b2d3e0b24e8d9fc9 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:10 np0005550137 bash[104865]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0
Dec  8 04:50:10 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@grafana.compute-0.service: Deactivated successfully.
Dec  8 04:50:10 np0005550137 systemd[1]: Stopped Ceph grafana.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:50:10 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@grafana.compute-0.service: Consumed 4.413s CPU time.
Dec  8 04:50:10 np0005550137 systemd[1]: Starting Ceph grafana.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4...
Dec  8 04:50:10 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec  8 04:50:10 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:10.626Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.001063402s
Dec  8 04:50:10 np0005550137 podman[104972]: 2025-12-08 09:50:10.671445647 +0000 UTC m=+0.044138083 container create 5c6ee1dec0d4c3842d66658944a72f6e8e0d87939360a1fbd7cc3dbf06b76fb4 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a9ce5cf693f9c204d351f247dd223c810aa2eebb99860c64ff36f54992542a/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a9ce5cf693f9c204d351f247dd223c810aa2eebb99860c64ff36f54992542a/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a9ce5cf693f9c204d351f247dd223c810aa2eebb99860c64ff36f54992542a/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a9ce5cf693f9c204d351f247dd223c810aa2eebb99860c64ff36f54992542a/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:10 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a9ce5cf693f9c204d351f247dd223c810aa2eebb99860c64ff36f54992542a/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: Reconfiguring daemon grafana.compute-0 on compute-0
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec  8 04:50:10 np0005550137 podman[104972]: 2025-12-08 09:50:10.650721447 +0000 UTC m=+0.023413923 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Dec  8 04:50:10 np0005550137 podman[104972]: 2025-12-08 09:50:10.74792868 +0000 UTC m=+0.120621136 container init 5c6ee1dec0d4c3842d66658944a72f6e8e0d87939360a1fbd7cc3dbf06b76fb4 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:10 np0005550137 podman[104972]: 2025-12-08 09:50:10.758416639 +0000 UTC m=+0.131109105 container start 5c6ee1dec0d4c3842d66658944a72f6e8e0d87939360a1fbd7cc3dbf06b76fb4 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:10 np0005550137 bash[104972]: 5c6ee1dec0d4c3842d66658944a72f6e8e0d87939360a1fbd7cc3dbf06b76fb4
Dec  8 04:50:10 np0005550137 systemd[1]: Started Ceph grafana.compute-0 for ceb838ef-9d5d-54e4-bddb-2f01adce2ad4.
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:10 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Dec  8 04:50:10 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:10 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:10 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Dec  8 04:50:10 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Dec  8 04:50:10 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:10 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec  8 04:50:10 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:10.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.95068951Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-12-08T09:50:10Z
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950913227Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950919617Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950923357Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950926767Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950930517Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950933847Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950937147Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950940588Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950943908Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950947808Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950951038Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950954398Z level=info msg=Target target=[all]
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950960238Z level=info msg="Path Home" path=/usr/share/grafana
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950963758Z level=info msg="Path Data" path=/var/lib/grafana
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950967288Z level=info msg="Path Logs" path=/var/log/grafana
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950970418Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950973549Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=settings t=2025-12-08T09:50:10.950976729Z level=info msg="App mode production"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=sqlstore t=2025-12-08T09:50:10.951417472Z level=info msg="Connecting to DB" dbtype=sqlite3
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=sqlstore t=2025-12-08T09:50:10.951436563Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=migrator t=2025-12-08T09:50:10.951937718Z level=info msg="Starting DB migrations"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=migrator t=2025-12-08T09:50:10.972493642Z level=info msg="migrations completed" performed=0 skipped=547 duration=1.082292ms
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=sqlstore t=2025-12-08T09:50:10.973552755Z level=info msg="Created default organization"
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=secrets t=2025-12-08T09:50:10.974003849Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Dec  8 04:50:10 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=plugin.store t=2025-12-08T09:50:10.990765148Z level=info msg="Loading plugins..."
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec  8 04:50:11 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v34: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=local.finder t=2025-12-08T09:50:11.080940827Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=plugin.store t=2025-12-08T09:50:11.080995698Z level=info msg="Plugins loaded" count=55 duration=90.22849ms
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=query_data t=2025-12-08T09:50:11.085425954Z level=info msg="Query Service initialization"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=live.push_http t=2025-12-08T09:50:11.09024523Z level=info msg="Live Push Gateway initialization"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=ngalert.migration t=2025-12-08T09:50:11.093477658Z level=info msg=Starting
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=ngalert.state.manager t=2025-12-08T09:50:11.104598896Z level=info msg="Running in alternative execution of Error/NoData mode"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=infra.usagestats.collector t=2025-12-08T09:50:11.106280267Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=provisioning.datasources t=2025-12-08T09:50:11.108261107Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=provisioning.alerting t=2025-12-08T09:50:11.130114911Z level=info msg="starting to provision alerting"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=provisioning.alerting t=2025-12-08T09:50:11.130138592Z level=info msg="finished to provision alerting"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=ngalert.state.manager t=2025-12-08T09:50:11.130277726Z level=info msg="Warming state cache for startup"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=ngalert.multiorg.alertmanager t=2025-12-08T09:50:11.130539684Z level=info msg="Starting MultiOrg Alertmanager"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=grafanaStorageLogger t=2025-12-08T09:50:11.130844814Z level=info msg="Storage starting"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=http.server t=2025-12-08T09:50:11.140373852Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=http.server t=2025-12-08T09:50:11.140785925Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=provisioning.dashboard t=2025-12-08T09:50:11.169439916Z level=info msg="starting to provision dashboards"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=ngalert.state.manager t=2025-12-08T09:50:11.171692174Z level=info msg="State cache has been initialized" states=0 duration=41.412868ms
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=ngalert.scheduler t=2025-12-08T09:50:11.171732145Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=ticker t=2025-12-08T09:50:11.171809758Z level=info msg=starting first_tick=2025-12-08T09:50:20Z
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=provisioning.dashboard t=2025-12-08T09:50:11.18867222Z level=info msg="finished to provision dashboards"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=grafana.update.checker t=2025-12-08T09:50:11.197941712Z level=info msg="Update check succeeded" duration=67.213262ms
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=plugins.update.checker t=2025-12-08T09:50:11.215231036Z level=info msg="Update check succeeded" duration=84.461395ms
Dec  8 04:50:11 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 97 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=96/97 n=6 ec=52/36 lis/c=52/52 les/c/f=53/53/0 sis=96) [0]/[1] async=[0] r=0 lpr=96 pi=[52,96)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=grafana-apiserver t=2025-12-08T09:50:11.435844179Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Dec  8 04:50:11 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0[104988]: logger=grafana-apiserver t=2025-12-08T09:50:11.436256122Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Dec  8 04:50:11 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:11 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:11 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:11.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:11 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.10 deep-scrub starts
Dec  8 04:50:11 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 12.10 deep-scrub ok
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:11 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Dec  8 04:50:11 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:11 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-1
Dec  8 04:50:11 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-1
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: Reconfiguring crash.compute-1 (monmap changed)...
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: Reconfiguring daemon crash.compute-1 on compute-1
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: Reconfiguring osd.0 (monmap changed)...
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Dec  8 04:50:11 np0005550137 ceph-mon[74516]: Reconfiguring daemon osd.0 on compute-1
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec  8 04:50:12 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 98 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=96/97 n=6 ec=52/36 lis/c=96/52 les/c/f=97/53/0 sis=98 pruub=15.382191658s) [0] async=[0] r=-1 lpr=98 pi=[52,98)/1 crt=49'1026 lcod 0'0 mlcod 0'0 active pruub 236.122207642s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:12 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 98 pg[9.12( v 49'1026 (0'0,49'1026] local-lis/les=96/97 n=6 ec=52/36 lis/c=96/52 les/c/f=97/53/0 sis=98 pruub=15.382119179s) [0] r=-1 lpr=98 pi=[52,98)/1 crt=49'1026 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 236.122207642s@ mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Dec  8 04:50:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:12 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:12 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Dec  8 04:50:12 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Dec  8 04:50:12 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec  8 04:50:12 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec  8 04:50:12 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:12 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:50:12 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:12.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:50:13 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v37: 353 pgs: 353 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: Reconfiguring mon.compute-1 (monmap changed)...
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: Reconfiguring daemon mon.compute-1 on compute-1
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:13 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Dec  8 04:50:13 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:13 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:13 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Dec  8 04:50:13 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Dec  8 04:50:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:13 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Dec  8 04:50:13 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:13 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Dec  8 04:50:13 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:13 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:13 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:13.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:13 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec  8 04:50:13 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: Reconfiguring mon.compute-2 (monmap changed)...
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: Reconfiguring daemon mon.compute-2 on compute-2
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.zqytsv (monmap changed)...
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.zqytsv (monmap changed)...
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "mgr services"}]: dispatch
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.zqytsv on compute-2
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.zqytsv on compute-2
Dec  8 04:50:14 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec  8 04:50:14 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Dec  8 04:50:14 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  8 04:50:14 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:14 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:14.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Dec  8 04:50:14 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: [prometheus INFO root] Restarting engine...
Dec  8 04:50:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:50:14] ENGINE Bus STOPPING
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:50:14] ENGINE Bus STOPPING
Dec  8 04:50:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:50:14] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec  8 04:50:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:50:14] ENGINE Bus STOPPED
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:50:14] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:50:14] ENGINE Bus STOPPED
Dec  8 04:50:14 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:50:14] ENGINE Bus STARTING
Dec  8 04:50:14 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:50:14] ENGINE Bus STARTING
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:50:15 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v39: 353 pgs: 1 peering, 1 active+clean+scrubbing+deep, 351 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:15 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:50:15] ENGINE Serving on http://:::9283
Dec  8 04:50:15 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:50:15] ENGINE Serving on http://:::9283
Dec  8 04:50:15 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: [08/Dec/2025:09:50:15] ENGINE Bus STARTED
Dec  8 04:50:15 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.error] [08/Dec/2025:09:50:15] ENGINE Bus STARTED
Dec  8 04:50:15 np0005550137 ceph-mgr[74806]: [prometheus INFO root] Engine started.
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: Reconfiguring mgr.compute-2.zqytsv (monmap changed)...
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.zqytsv", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: Reconfiguring daemon mgr.compute-2.zqytsv on compute-2
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Dec  8 04:50:15 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:15 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:15 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:15 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:15.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:15 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec  8 04:50:15 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec  8 04:50:15 np0005550137 podman[105152]: 2025-12-08 09:50:15.674249404 +0000 UTC m=+0.056911620 container exec e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Dec  8 04:50:15 np0005550137 podman[105152]: 2025-12-08 09:50:15.792044833 +0000 UTC m=+0.174707039 container exec_died e9eed32aa882af654b63b8cfe5830bc1e3b4cbe08ed44f11c04405084f3a1007 (image=quay.io/ceph/ceph:v19, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mon-compute-0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec  8 04:50:15 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.access.140536868639312] ::ffff:192.168.122.100 - - [08/Dec/2025:09:50:15] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Dec  8 04:50:15 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ::ffff:192.168.122.100 - - [08/Dec/2025:09:50:15] "GET /metrics HTTP/1.1" 200 48277 "" "Prometheus/2.51.0"
Dec  8 04:50:16 np0005550137 podman[105288]: 2025-12-08 09:50:16.487526841 +0000 UTC m=+0.077354310 container exec a993be6ff2aac952a2d6ade088491ae2d5efa7b34003bbbc41aca7a803586ead (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:16 np0005550137 podman[105288]: 2025-12-08 09:50:16.495969228 +0000 UTC m=+0.085796657 container exec_died a993be6ff2aac952a2d6ade088491ae2d5efa7b34003bbbc41aca7a803586ead (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:16 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec  8 04:50:16 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec  8 04:50:16 np0005550137 podman[105363]: 2025-12-08 09:50:16.73531259 +0000 UTC m=+0.052201148 container exec 3d3ae79956baf0088b3c44608b1b17208fadcf8da8dd9e93dccde248bbc68292 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:50:16 np0005550137 podman[105363]: 2025-12-08 09:50:16.748054416 +0000 UTC m=+0.064942974 container exec_died 3d3ae79956baf0088b3c44608b1b17208fadcf8da8dd9e93dccde248bbc68292 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:50:16 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:16 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:50:16 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:16.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:50:16 np0005550137 podman[105424]: 2025-12-08 09:50:16.998777124 +0000 UTC m=+0.061652744 container exec 7f6df096ca74536932244eb4e1f4382864c206173aaf0a0b9089cd0768af80db (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo)
Dec  8 04:50:17 np0005550137 podman[105424]: 2025-12-08 09:50:17.009986744 +0000 UTC m=+0.072862344 container exec_died 7f6df096ca74536932244eb4e1f4382864c206173aaf0a0b9089cd0768af80db (image=quay.io/ceph/haproxy:2.3, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo)
Dec  8 04:50:17 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 1 peering, 1 active+clean+scrubbing+deep, 351 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:17 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  8 04:50:17 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  8 04:50:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:50:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:50:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:50:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:50:17 np0005550137 podman[105492]: 2025-12-08 09:50:17.262479614 +0000 UTC m=+0.074627457 container exec 860f9b1fceef64b25d38d4f198ba3ddb3d3c4871377cfc5a28c6fffa3c89de5c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, release=1793, io.openshift.expose-services=, description=keepalived for Ceph)
Dec  8 04:50:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] scanning for idle connections..
Dec  8 04:50:17 np0005550137 ceph-mgr[74806]: [volumes INFO mgr_util] cleaning up connections: []
Dec  8 04:50:17 np0005550137 podman[105492]: 2025-12-08 09:50:17.28106821 +0000 UTC m=+0.093216013 container exec_died 860f9b1fceef64b25d38d4f198ba3ddb3d3c4871377cfc5a28c6fffa3c89de5c (image=quay.io/ceph/keepalived:2.2.4, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-keepalived-nfs-cephfs-compute-0-qxgfft, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, name=keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, com.redhat.component=keepalived-container, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  8 04:50:17 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:17 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:17 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:17.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:17 np0005550137 podman[105558]: 2025-12-08 09:50:17.544838284 +0000 UTC m=+0.062836971 container exec 595e69113afbc821777e75fb319ad360d8a2f8bfd86aab1547345132e22e0412 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:17 np0005550137 podman[105558]: 2025-12-08 09:50:17.590440158 +0000 UTC m=+0.108438835 container exec_died 595e69113afbc821777e75fb319ad360d8a2f8bfd86aab1547345132e22e0412 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:17 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec  8 04:50:17 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec  8 04:50:17 np0005550137 podman[105633]: 2025-12-08 09:50:17.846336133 +0000 UTC m=+0.075050231 container exec 5c6ee1dec0d4c3842d66658944a72f6e8e0d87939360a1fbd7cc3dbf06b76fb4 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:18 np0005550137 podman[105633]: 2025-12-08 09:50:18.007625633 +0000 UTC m=+0.236339701 container exec_died 5c6ee1dec0d4c3842d66658944a72f6e8e0d87939360a1fbd7cc3dbf06b76fb4 (image=quay.io/ceph/grafana:10.4.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Dec  8 04:50:18 np0005550137 podman[105746]: 2025-12-08 09:50:18.412734571 +0000 UTC m=+0.050221858 container exec d4d95e6750bbb2e718479d53632eff7b41f65f80181b0d3e530725fcb6fb28e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:18 np0005550137 podman[105746]: 2025-12-08 09:50:18.443551547 +0000 UTC m=+0.081038824 container exec_died d4d95e6750bbb2e718479d53632eff7b41f65f80181b0d3e530725fcb6fb28e3 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec  8 04:50:18 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Dec  8 04:50:18 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.0 deep-scrub starts
Dec  8 04:50:18 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.0 deep-scrub ok
Dec  8 04:50:18 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-alertmanager-compute-0[104680]: ts=2025-12-08T09:50:18.630Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.005006455s
Dec  8 04:50:18 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:18 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:18 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:18.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:19 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v41: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Dec  8 04:50:19 np0005550137 podman[105882]: 2025-12-08 09:50:19.180255828 +0000 UTC m=+0.064385337 container create 96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:50:19 np0005550137 systemd[93042]: Starting Mark boot as successful...
Dec  8 04:50:19 np0005550137 systemd[93042]: Finished Mark boot as successful.
Dec  8 04:50:19 np0005550137 systemd[1]: Started libpod-conmon-96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b.scope.
Dec  8 04:50:19 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:19 np0005550137 podman[105882]: 2025-12-08 09:50:19.156815736 +0000 UTC m=+0.040945305 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:19 np0005550137 podman[105882]: 2025-12-08 09:50:19.259851416 +0000 UTC m=+0.143981005 container init 96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Dec  8 04:50:19 np0005550137 podman[105882]: 2025-12-08 09:50:19.267521929 +0000 UTC m=+0.151651468 container start 96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec  8 04:50:19 np0005550137 podman[105882]: 2025-12-08 09:50:19.271460699 +0000 UTC m=+0.155590258 container attach 96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:50:19 np0005550137 festive_kapitsa[105899]: 167 167
Dec  8 04:50:19 np0005550137 systemd[1]: libpod-96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b.scope: Deactivated successfully.
Dec  8 04:50:19 np0005550137 podman[105882]: 2025-12-08 09:50:19.277967596 +0000 UTC m=+0.162097155 container died 96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:50:19 np0005550137 systemd[1]: var-lib-containers-storage-overlay-23b3f549e9c17af679d3a5ee5400c00b05b293085059bc268ca89f2d6c1a1e27-merged.mount: Deactivated successfully.
Dec  8 04:50:19 np0005550137 podman[105882]: 2025-12-08 09:50:19.330608066 +0000 UTC m=+0.214737615 container remove 96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_kapitsa, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  8 04:50:19 np0005550137 systemd[1]: libpod-conmon-96fb1dc425e43c8cf91e510c64ff9424e66cd40f28aa7cabb4be21246140e77b.scope: Deactivated successfully.
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  8 04:50:19 np0005550137 podman[105924]: 2025-12-08 09:50:19.519258697 +0000 UTC m=+0.062060507 container create 5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:19 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:19 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:19 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:19.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Dec  8 04:50:19 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:19 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec  8 04:50:19 np0005550137 systemd[1]: Started libpod-conmon-5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648.scope.
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec  8 04:50:19 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec  8 04:50:19 np0005550137 podman[105924]: 2025-12-08 09:50:19.496377092 +0000 UTC m=+0.039178892 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:19 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fc796b3d187281481731ea3fa35c51ac7786b6334afb83eeff16ff7d1e28a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fc796b3d187281481731ea3fa35c51ac7786b6334afb83eeff16ff7d1e28a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fc796b3d187281481731ea3fa35c51ac7786b6334afb83eeff16ff7d1e28a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fc796b3d187281481731ea3fa35c51ac7786b6334afb83eeff16ff7d1e28a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:19 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66fc796b3d187281481731ea3fa35c51ac7786b6334afb83eeff16ff7d1e28a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:19 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec  8 04:50:19 np0005550137 podman[105924]: 2025-12-08 09:50:19.627065262 +0000 UTC m=+0.169867052 container init 5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec  8 04:50:19 np0005550137 podman[105924]: 2025-12-08 09:50:19.643943696 +0000 UTC m=+0.186745476 container start 5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mahavira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325)
Dec  8 04:50:19 np0005550137 podman[105924]: 2025-12-08 09:50:19.648256176 +0000 UTC m=+0.191057956 container attach 5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:50:19 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec  8 04:50:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:50:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec  8 04:50:20 np0005550137 dazzling_mahavira[105952]: --> passed data devices: 0 physical, 1 LVM
Dec  8 04:50:20 np0005550137 dazzling_mahavira[105952]: --> All data devices are unavailable
Dec  8 04:50:20 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec  8 04:50:20 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec  8 04:50:20 np0005550137 systemd[1]: libpod-5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648.scope: Deactivated successfully.
Dec  8 04:50:20 np0005550137 podman[105924]: 2025-12-08 09:50:20.059289823 +0000 UTC m=+0.602091633 container died 5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:20 np0005550137 systemd[1]: var-lib-containers-storage-overlay-66fc796b3d187281481731ea3fa35c51ac7786b6334afb83eeff16ff7d1e28a9-merged.mount: Deactivated successfully.
Dec  8 04:50:20 np0005550137 podman[105924]: 2025-12-08 09:50:20.123121073 +0000 UTC m=+0.665922893 container remove 5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:50:20 np0005550137 systemd[1]: libpod-conmon-5e7aec91f592b0b2bb462eeaa39ac5021e861997500e75271e848f69d2996648.scope: Deactivated successfully.
Dec  8 04:50:20 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec  8 04:50:20 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec  8 04:50:20 np0005550137 ceph-osd[83009]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec  8 04:50:20 np0005550137 podman[106075]: 2025-12-08 09:50:20.790194739 +0000 UTC m=+0.059223591 container create 823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_feistel, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  8 04:50:20 np0005550137 systemd[1]: Started libpod-conmon-823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865.scope.
Dec  8 04:50:20 np0005550137 podman[106075]: 2025-12-08 09:50:20.763082395 +0000 UTC m=+0.032111267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:20 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:20 np0005550137 podman[106075]: 2025-12-08 09:50:20.882265126 +0000 UTC m=+0.151294028 container init 823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_feistel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Dec  8 04:50:20 np0005550137 podman[106075]: 2025-12-08 09:50:20.889499776 +0000 UTC m=+0.158528628 container start 823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Dec  8 04:50:20 np0005550137 podman[106075]: 2025-12-08 09:50:20.893420135 +0000 UTC m=+0.162449027 container attach 823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_feistel, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:50:20 np0005550137 objective_feistel[106091]: 167 167
Dec  8 04:50:20 np0005550137 systemd[1]: libpod-823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865.scope: Deactivated successfully.
Dec  8 04:50:20 np0005550137 podman[106075]: 2025-12-08 09:50:20.897723756 +0000 UTC m=+0.166752618 container died 823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Dec  8 04:50:20 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:20 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6988000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:20 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:20 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:20 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:20.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:20 np0005550137 systemd[1]: var-lib-containers-storage-overlay-e7e9840e621706c9ec7c15406edd0e35a29107b94f405928b180165f44af9209-merged.mount: Deactivated successfully.
Dec  8 04:50:20 np0005550137 podman[106075]: 2025-12-08 09:50:20.94231089 +0000 UTC m=+0.211339712 container remove 823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_feistel, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Dec  8 04:50:20 np0005550137 systemd[1]: libpod-conmon-823ce91aa8966a644514f43b97a70c9f074d15e4deeda318d665735c1d7cc865.scope: Deactivated successfully.
Dec  8 04:50:21 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v44: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec  8 04:50:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  8 04:50:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec  8 04:50:21 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  8 04:50:21 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec  8 04:50:21 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec  8 04:50:21 np0005550137 podman[106116]: 2025-12-08 09:50:21.169239055 +0000 UTC m=+0.056307762 container create 24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brattain, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:50:21 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Dec  8 04:50:21 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec  8 04:50:21 np0005550137 systemd[1]: Started libpod-conmon-24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c.scope.
Dec  8 04:50:21 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:21 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6980001e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:21 np0005550137 podman[106116]: 2025-12-08 09:50:21.140496361 +0000 UTC m=+0.027565118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:21 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:21 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:21 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5c6b26c80b30f25b74113195942be68ea1be9d9c9d0f49715f6784458bff37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5c6b26c80b30f25b74113195942be68ea1be9d9c9d0f49715f6784458bff37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5c6b26c80b30f25b74113195942be68ea1be9d9c9d0f49715f6784458bff37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:21 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a5c6b26c80b30f25b74113195942be68ea1be9d9c9d0f49715f6784458bff37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:21 np0005550137 podman[106116]: 2025-12-08 09:50:21.275930236 +0000 UTC m=+0.162998993 container init 24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:50:21 np0005550137 podman[106116]: 2025-12-08 09:50:21.288205318 +0000 UTC m=+0.175274055 container start 24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:50:21 np0005550137 podman[106116]: 2025-12-08 09:50:21.292116948 +0000 UTC m=+0.179185755 container attach 24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brattain, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec  8 04:50:21 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:21 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec  8 04:50:21 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:21.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec  8 04:50:21 np0005550137 strange_brattain[106132]: {
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:    "1": [
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:        {
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "devices": [
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "/dev/loop3"
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            ],
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "lv_name": "ceph_lv0",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "lv_size": "21470642176",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=ceb838ef-9d5d-54e4-bddb-2f01adce2ad4,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=10863df8-16d4-4896-ae26-227efb76290e,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "lv_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "name": "ceph_lv0",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "path": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "tags": {
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.block_uuid": "RomYjf-Huw1-Uvyl-0ZXT-yb3F-i1Wo-KjPFYE",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.cephx_lockbox_secret": "",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.cluster_fsid": "ceb838ef-9d5d-54e4-bddb-2f01adce2ad4",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.cluster_name": "ceph",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.crush_device_class": "",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.encrypted": "0",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.osd_fsid": "10863df8-16d4-4896-ae26-227efb76290e",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.osd_id": "1",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.osdspec_affinity": "default_drive_group",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.type": "block",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.vdo": "0",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:                "ceph.with_tpm": "0"
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            },
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "type": "block",
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:            "vg_name": "ceph_vg0"
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:        }
Dec  8 04:50:21 np0005550137 strange_brattain[106132]:    ]
Dec  8 04:50:21 np0005550137 strange_brattain[106132]: }
Dec  8 04:50:21 np0005550137 systemd[1]: libpod-24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c.scope: Deactivated successfully.
Dec  8 04:50:21 np0005550137 podman[106116]: 2025-12-08 09:50:21.617169453 +0000 UTC m=+0.504238170 container died 24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Dec  8 04:50:21 np0005550137 systemd[1]: var-lib-containers-storage-overlay-2a5c6b26c80b30f25b74113195942be68ea1be9d9c9d0f49715f6784458bff37-merged.mount: Deactivated successfully.
Dec  8 04:50:21 np0005550137 podman[106116]: 2025-12-08 09:50:21.667876014 +0000 UTC m=+0.554944731 container remove 24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Dec  8 04:50:21 np0005550137 systemd[1]: libpod-conmon-24c1a3635dcbbc582690337ce2accde711364a498ba1d0d268e4f12f793c639c.scope: Deactivated successfully.
Dec  8 04:50:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec  8 04:50:22 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec  8 04:50:22 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec  8 04:50:22 np0005550137 podman[106267]: 2025-12-08 09:50:22.24773206 +0000 UTC m=+0.067860803 container create 37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:50:22 np0005550137 systemd[1]: Started libpod-conmon-37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644.scope.
Dec  8 04:50:22 np0005550137 podman[106267]: 2025-12-08 09:50:22.217505951 +0000 UTC m=+0.037634724 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:22 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:22 np0005550137 podman[106267]: 2025-12-08 09:50:22.334682472 +0000 UTC m=+0.154811195 container init 37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_saha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Dec  8 04:50:22 np0005550137 podman[106267]: 2025-12-08 09:50:22.340527229 +0000 UTC m=+0.160655932 container start 37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec  8 04:50:22 np0005550137 podman[106267]: 2025-12-08 09:50:22.344018775 +0000 UTC m=+0.164147478 container attach 37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_saha, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec  8 04:50:22 np0005550137 gallant_saha[106283]: 167 167
Dec  8 04:50:22 np0005550137 systemd[1]: libpod-37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644.scope: Deactivated successfully.
Dec  8 04:50:22 np0005550137 conmon[106283]: conmon 37399860bb16a6bb7aca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644.scope/container/memory.events
Dec  8 04:50:22 np0005550137 podman[106267]: 2025-12-08 09:50:22.349087699 +0000 UTC m=+0.169216402 container died 37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_saha, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:50:22 np0005550137 systemd[1]: var-lib-containers-storage-overlay-8079c7759dcddf5af8324193630b471ed25424ae52ea143d1717bf94d6421970-merged.mount: Deactivated successfully.
Dec  8 04:50:22 np0005550137 podman[106267]: 2025-12-08 09:50:22.389853778 +0000 UTC m=+0.209982521 container remove 37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_saha, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec  8 04:50:22 np0005550137 systemd[1]: libpod-conmon-37399860bb16a6bb7aca0f3acece6eaf2fb592f27b8644dd9e8e0a7d40065644.scope: Deactivated successfully.
Dec  8 04:50:22 np0005550137 podman[106309]: 2025-12-08 09:50:22.627797826 +0000 UTC m=+0.063476089 container create 4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kare, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Dec  8 04:50:22 np0005550137 systemd[1]: Started libpod-conmon-4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13.scope.
Dec  8 04:50:22 np0005550137 systemd[1]: Started libcrun container.
Dec  8 04:50:22 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f4c01b3071732b4cc4266a6782fe844360a091c2e760461ef065448189a531/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:22 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f4c01b3071732b4cc4266a6782fe844360a091c2e760461ef065448189a531/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:22 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f4c01b3071732b4cc4266a6782fe844360a091c2e760461ef065448189a531/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:22 np0005550137 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64f4c01b3071732b4cc4266a6782fe844360a091c2e760461ef065448189a531/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec  8 04:50:22 np0005550137 podman[106309]: 2025-12-08 09:50:22.608275953 +0000 UTC m=+0.043954216 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Dec  8 04:50:22 np0005550137 podman[106309]: 2025-12-08 09:50:22.709042145 +0000 UTC m=+0.144720418 container init 4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kare, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec  8 04:50:22 np0005550137 podman[106309]: 2025-12-08 09:50:22.71942998 +0000 UTC m=+0.155108233 container start 4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Dec  8 04:50:22 np0005550137 podman[106309]: 2025-12-08 09:50:22.722953667 +0000 UTC m=+0.158631920 container attach 4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kare, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:22 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:22 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6988000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:22 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:22 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec  8 04:50:22 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:22.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec  8 04:50:23 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v47: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec  8 04:50:23 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo[98033]: [WARNING] 341/095023 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Dec  8 04:50:23 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:23 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:23 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:23 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6980001e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:23 np0005550137 lvm[106401]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:50:23 np0005550137 lvm[106401]: VG ceph_vg0 finished
Dec  8 04:50:23 np0005550137 lvm[106404]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:50:23 np0005550137 lvm[106404]: VG ceph_vg0 finished
Dec  8 04:50:23 np0005550137 upbeat_kare[106327]: {}
Dec  8 04:50:23 np0005550137 systemd[1]: libpod-4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13.scope: Deactivated successfully.
Dec  8 04:50:23 np0005550137 podman[106309]: 2025-12-08 09:50:23.424023227 +0000 UTC m=+0.859701500 container died 4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:50:23 np0005550137 systemd[1]: libpod-4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13.scope: Consumed 1.135s CPU time.
Dec  8 04:50:23 np0005550137 systemd[1]: var-lib-containers-storage-overlay-64f4c01b3071732b4cc4266a6782fe844360a091c2e760461ef065448189a531-merged.mount: Deactivated successfully.
Dec  8 04:50:23 np0005550137 podman[106309]: 2025-12-08 09:50:23.470607551 +0000 UTC m=+0.906285804 container remove 4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_kare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  8 04:50:23 np0005550137 systemd[1]: libpod-conmon-4669a56061dac2f74aad6ea1537c9d88c29a3781635c2040e6561ae1c0363a13.scope: Deactivated successfully.
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec  8 04:50:23 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:23 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:23 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:50:23 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:23.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:50:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec  8 04:50:24 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec  8 04:50:24 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec  8 04:50:24 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:24 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' 
Dec  8 04:50:24 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:24 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:24 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:24 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:24 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:24.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:50:25 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:25 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69880021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:25 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec  8 04:50:25 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec  8 04:50:25 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec  8 04:50:25 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:25 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:25 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:25.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:25 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ::ffff:192.168.122.100 - - [08/Dec/2025:09:50:25] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  8 04:50:25 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.access.140536868639312] ::ffff:192.168.122.100 - - [08/Dec/2025:09:50:25] "GET /metrics HTTP/1.1" 200 48267 "" "Prometheus/2.51.0"
Dec  8 04:50:26 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:26 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6980002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:26 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:26 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:50:26 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:26.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:50:27 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:27 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:27 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:27 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69880021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:27 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:27 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:27 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:28 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:28 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:28 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:28 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:28 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:28.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:29 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Dec  8 04:50:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec  8 04:50:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  8 04:50:29 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:29 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6980002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:29 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:29 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec  8 04:50:29 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Dec  8 04:50:29 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  8 04:50:29 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec  8 04:50:29 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec  8 04:50:29 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:29 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec  8 04:50:29 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:29.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec  8 04:50:29 np0005550137 systemd-logind[805]: New session 38 of user zuul.
Dec  8 04:50:29 np0005550137 systemd[1]: Started Session 38 of User zuul.
Dec  8 04:50:30 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:50:30 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec  8 04:50:30 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:30 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69880021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:30 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:30 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:30 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:30.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:31 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 302 B/s rd, 0 op/s; 16 B/s, 0 objects/s recovering
Dec  8 04:50:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec  8 04:50:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  8 04:50:31 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:31 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:31 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:31 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6980002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec  8 04:50:31 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Dec  8 04:50:31 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  8 04:50:31 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec  8 04:50:31 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec  8 04:50:31 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=75/75 les/c/f=76/76/0 sis=108) [1] r=0 lpr=108 pi=[75,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:50:31 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:31 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:31 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:31.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Dec  8 04:50:32 np0005550137 ceph-mon[74516]: log_channel(audit) log [DBG] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Dec  8 04:50:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec  8 04:50:32 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec  8 04:50:32 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec  8 04:50:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=75/75 les/c/f=76/76/0 sis=109) [1]/[2] r=-1 lpr=109 pi=[75,109)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:32 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=75/75 les/c/f=76/76/0 sis=109) [1]/[2] r=-1 lpr=109 pi=[75,109)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:32 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec  8 04:50:32 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:32 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:32 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:32 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:32 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:32.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:33 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Dec  8 04:50:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec  8 04:50:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  8 04:50:33 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:33 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69880095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:33 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:33 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6964002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec  8 04:50:33 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Dec  8 04:50:33 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  8 04:50:33 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec  8 04:50:33 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec  8 04:50:33 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 110 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=81/81 les/c/f=82/82/0 sis=110) [1] r=0 lpr=110 pi=[81,110)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:50:33 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:33 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000030s ======
Dec  8 04:50:33 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:33.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Dec  8 04:50:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec  8 04:50:34 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec  8 04:50:34 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec  8 04:50:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 111 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=81/81 les/c/f=82/82/0 sis=111) [1]/[0] r=-1 lpr=111 pi=[81,111)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 111 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=81/81 les/c/f=82/82/0 sis=111) [1]/[0] r=-1 lpr=111 pi=[81,111)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 111 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=7 ec=52/36 lis/c=109/75 les/c/f=110/76/0 sis=111) [1] r=0 lpr=111 pi=[75,111)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:34 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 111 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=7 ec=52/36 lis/c=109/75 les/c/f=110/76/0 sis=111) [1] r=0 lpr=111 pi=[75,111)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:50:34 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec  8 04:50:34 np0005550137 ovs-vsctl[106662]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec  8 04:50:34 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:34 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6980002d40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Dec  8 04:50:34 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:34 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:34 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:34.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:50:35 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v61: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  8 04:50:35 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno[104424]: 08/12/2025 09:50:35 : epoch 69369f4f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f695c002b10 fd 39 proxy ignored for local
Dec  8 04:50:35 np0005550137 kernel: ganesha.nfsd[106098]: segfault at 50 ip 00007f6a3145932e sp 00007f69e67fb210 error 4 in libntirpc.so.5.8[7f6a3143e000+2c000] likely on CPU 7 (core 0, socket 7)
Dec  8 04:50:35 np0005550137 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Dec  8 04:50:35 np0005550137 systemd[1]: Started Process Core Dump (PID 106695/UID 0).
Dec  8 04:50:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec  8 04:50:35 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec  8 04:50:35 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec  8 04:50:35 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 112 pg[9.19( v 49'1026 (0'0,49'1026] local-lis/les=111/112 n=7 ec=52/36 lis/c=109/75 les/c/f=110/76/0 sis=111) [1] r=0 lpr=111 pi=[75,111)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:50:35 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:35 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:35 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:35 np0005550137 ceph-mgr[74806]: [prometheus INFO cherrypy.access.140536868639312] ::ffff:192.168.122.100 - - [08/Dec/2025:09:50:35] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  8 04:50:35 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-mgr-compute-0-kitiwu[74802]: ::ffff:192.168.122.100 - - [08/Dec/2025:09:50:35] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Dec  8 04:50:36 np0005550137 lvm[106982]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec  8 04:50:36 np0005550137 lvm[106982]: VG ceph_vg0 finished
Dec  8 04:50:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec  8 04:50:36 np0005550137 systemd-coredump[106696]: Process 104440 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007f6a3145932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Dec  8 04:50:36 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec  8 04:50:36 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec  8 04:50:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 113 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=4 ec=52/36 lis/c=111/81 les/c/f=112/82/0 sis=113) [1] r=0 lpr=113 pi=[81,113)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:36 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 113 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=4 ec=52/36 lis/c=111/81 les/c/f=112/82/0 sis=113) [1] r=0 lpr=113 pi=[81,113)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:50:36 np0005550137 systemd[1]: systemd-coredump@1-106695-0.service: Deactivated successfully.
Dec  8 04:50:36 np0005550137 systemd[1]: systemd-coredump@1-106695-0.service: Consumed 1.266s CPU time.
Dec  8 04:50:36 np0005550137 podman[107105]: 2025-12-08 09:50:36.716248712 +0000 UTC m=+0.030995383 container died 3d3ae79956baf0088b3c44608b1b17208fadcf8da8dd9e93dccde248bbc68292 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:50:36 np0005550137 systemd[1]: var-lib-containers-storage-overlay-1be1305a2ab343e02f225e5fe8a403c8ae824de8773ff5f052831bda71d9a76c-merged.mount: Deactivated successfully.
Dec  8 04:50:36 np0005550137 podman[107105]: 2025-12-08 09:50:36.78107454 +0000 UTC m=+0.095821191 container remove 3d3ae79956baf0088b3c44608b1b17208fadcf8da8dd9e93dccde248bbc68292 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-nfs-cephfs-2-0-compute-0-cuvvno, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec  8 04:50:36 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@nfs.cephfs.2.0.compute-0.cuvvno.service: Main process exited, code=exited, status=139/n/a
Dec  8 04:50:36 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@nfs.cephfs.2.0.compute-0.cuvvno.service: Failed with result 'exit-code'.
Dec  8 04:50:36 np0005550137 systemd[1]: ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4@nfs.cephfs.2.0.compute-0.cuvvno.service: Consumed 1.585s CPU time.
Dec  8 04:50:36 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:36 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:36 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:36.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:37 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v64: 353 pgs: 1 remapped+peering, 352 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Dec  8 04:50:37 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:37 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:37 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:37.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec  8 04:50:37 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec  8 04:50:37 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec  8 04:50:37 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 114 pg[9.1a( v 49'1026 (0'0,49'1026] local-lis/les=113/114 n=4 ec=52/36 lis/c=111/81 les/c/f=112/82/0 sis=113) [1] r=0 lpr=113 pi=[81,113)/1 crt=49'1026 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec  8 04:50:38 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:38 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:38 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:38.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:39 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 449 B/s rd, 0 op/s; 48 B/s, 2 objects/s recovering
Dec  8 04:50:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec  8 04:50:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  8 04:50:39 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:39 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.001000031s ======
Dec  8 04:50:39 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:39.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Dec  8 04:50:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec  8 04:50:39 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Dec  8 04:50:39 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  8 04:50:39 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec  8 04:50:39 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec  8 04:50:39 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 115 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=115) [1] r=0 lpr=115 pi=[64,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec  8 04:50:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec  8 04:50:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec  8 04:50:40 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec  8 04:50:40 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec  8 04:50:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 116 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[64,116)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:40 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 116 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/36 lis/c=64/64 les/c/f=65/65/0 sis=116) [1]/[2] r=-1 lpr=116 pi=[64,116)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec  8 04:50:40 np0005550137 systemd[1]: Starting Hostname Service...
Dec  8 04:50:40 np0005550137 systemd[1]: Started Hostname Service.
Dec  8 04:50:40 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec  8 04:50:40 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:40 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.002000062s ======
Dec  8 04:50:40 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.100 - anonymous [08/Dec/2025:09:50:40.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000062s
Dec  8 04:50:41 np0005550137 ceph-mgr[74806]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 463 B/s rd, 0 op/s; 49 B/s, 2 objects/s recovering
Dec  8 04:50:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec  8 04:50:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  8 04:50:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec  8 04:50:41 np0005550137 ceph-mon[74516]: log_channel(audit) log [INF] : from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  8 04:50:41 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec  8 04:50:41 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec  8 04:50:41 np0005550137 ceph-ceb838ef-9d5d-54e4-bddb-2f01adce2ad4-haproxy-nfs-cephfs-compute-0-dvsreo[98033]: [WARNING] 341/095041 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Dec  8 04:50:41 np0005550137 radosgw[89717]: ====== starting new request req=0x7ff6ecd845d0 =====
Dec  8 04:50:41 np0005550137 radosgw[89717]: ====== req done req=0x7ff6ecd845d0 op status=0 http_status=200 latency=0.000000000s ======
Dec  8 04:50:41 np0005550137 radosgw[89717]: beast: 0x7ff6ecd845d0: 192.168.122.102 - anonymous [08/Dec/2025:09:50:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Dec  8 04:50:41 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Dec  8 04:50:41 np0005550137 ceph-mon[74516]: from='mgr.14766 192.168.122.100:0/2066651810' entity='mgr.compute-0.kitiwu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec  8 04:50:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec  8 04:50:42 np0005550137 ceph-mon[74516]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec  8 04:50:42 np0005550137 ceph-mon[74516]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec  8 04:50:42 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 118 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=2 ec=52/36 lis/c=116/64 les/c/f=117/65/0 sis=118) [1] r=0 lpr=118 pi=[64,118)/1 luod=0'0 crt=49'1026 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Dec  8 04:50:42 np0005550137 ceph-osd[83009]: osd.1 pg_epoch: 118 pg[9.1b( v 49'1026 (0'0,49'1026] local-lis/les=0/0 n=2 ec=52/36 lis/c=116/64 les/c/f=117/65/0 sis=118) [1] r=0 lpr=118 pi=[64,118)/1 crt=49'1026 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
