Nov 27 05:15:29 np0005537642 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 27 05:15:29 np0005537642 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 27 05:15:29 np0005537642 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 27 05:15:29 np0005537642 kernel: BIOS-provided physical RAM map:
Nov 27 05:15:29 np0005537642 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 27 05:15:29 np0005537642 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 27 05:15:29 np0005537642 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 27 05:15:29 np0005537642 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 27 05:15:29 np0005537642 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 27 05:15:29 np0005537642 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 27 05:15:29 np0005537642 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 27 05:15:29 np0005537642 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 27 05:15:29 np0005537642 kernel: NX (Execute Disable) protection: active
Nov 27 05:15:29 np0005537642 kernel: APIC: Static calls initialized
Nov 27 05:15:29 np0005537642 kernel: SMBIOS 2.8 present.
Nov 27 05:15:29 np0005537642 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 27 05:15:29 np0005537642 kernel: Hypervisor detected: KVM
Nov 27 05:15:29 np0005537642 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 27 05:15:29 np0005537642 kernel: kvm-clock: using sched offset of 4106678150 cycles
Nov 27 05:15:29 np0005537642 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 27 05:15:29 np0005537642 kernel: tsc: Detected 2800.000 MHz processor
Nov 27 05:15:29 np0005537642 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 27 05:15:29 np0005537642 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 27 05:15:29 np0005537642 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 27 05:15:29 np0005537642 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 27 05:15:29 np0005537642 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 27 05:15:29 np0005537642 kernel: Using GB pages for direct mapping
Nov 27 05:15:29 np0005537642 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 27 05:15:29 np0005537642 kernel: ACPI: Early table checksum verification disabled
Nov 27 05:15:29 np0005537642 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 27 05:15:29 np0005537642 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 27 05:15:29 np0005537642 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 27 05:15:29 np0005537642 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 27 05:15:29 np0005537642 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 27 05:15:29 np0005537642 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 27 05:15:29 np0005537642 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 27 05:15:29 np0005537642 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 27 05:15:29 np0005537642 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 27 05:15:29 np0005537642 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 27 05:15:29 np0005537642 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 27 05:15:29 np0005537642 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 27 05:15:29 np0005537642 kernel: No NUMA configuration found
Nov 27 05:15:29 np0005537642 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 27 05:15:29 np0005537642 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 27 05:15:29 np0005537642 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 27 05:15:29 np0005537642 kernel: Zone ranges:
Nov 27 05:15:29 np0005537642 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 27 05:15:29 np0005537642 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 27 05:15:29 np0005537642 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 27 05:15:29 np0005537642 kernel:  Device   empty
Nov 27 05:15:29 np0005537642 kernel: Movable zone start for each node
Nov 27 05:15:29 np0005537642 kernel: Early memory node ranges
Nov 27 05:15:29 np0005537642 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 27 05:15:29 np0005537642 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 27 05:15:29 np0005537642 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 27 05:15:29 np0005537642 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 27 05:15:29 np0005537642 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 27 05:15:29 np0005537642 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 27 05:15:29 np0005537642 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 27 05:15:29 np0005537642 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 27 05:15:29 np0005537642 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 27 05:15:29 np0005537642 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 27 05:15:29 np0005537642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 27 05:15:29 np0005537642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 27 05:15:29 np0005537642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 27 05:15:29 np0005537642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 27 05:15:29 np0005537642 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 27 05:15:29 np0005537642 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 27 05:15:29 np0005537642 kernel: TSC deadline timer available
Nov 27 05:15:29 np0005537642 kernel: CPU topo: Max. logical packages:   8
Nov 27 05:15:29 np0005537642 kernel: CPU topo: Max. logical dies:       8
Nov 27 05:15:29 np0005537642 kernel: CPU topo: Max. dies per package:   1
Nov 27 05:15:29 np0005537642 kernel: CPU topo: Max. threads per core:   1
Nov 27 05:15:29 np0005537642 kernel: CPU topo: Num. cores per package:     1
Nov 27 05:15:29 np0005537642 kernel: CPU topo: Num. threads per package:   1
Nov 27 05:15:29 np0005537642 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 27 05:15:29 np0005537642 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 27 05:15:29 np0005537642 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 27 05:15:29 np0005537642 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 27 05:15:29 np0005537642 kernel: Booting paravirtualized kernel on KVM
Nov 27 05:15:29 np0005537642 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 27 05:15:29 np0005537642 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 27 05:15:29 np0005537642 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 27 05:15:29 np0005537642 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 27 05:15:29 np0005537642 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 27 05:15:29 np0005537642 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 27 05:15:29 np0005537642 kernel: random: crng init done
Nov 27 05:15:29 np0005537642 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: Fallback order for Node 0: 0 
Nov 27 05:15:29 np0005537642 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 27 05:15:29 np0005537642 kernel: Policy zone: Normal
Nov 27 05:15:29 np0005537642 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 27 05:15:29 np0005537642 kernel: software IO TLB: area num 8.
Nov 27 05:15:29 np0005537642 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 27 05:15:29 np0005537642 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 27 05:15:29 np0005537642 kernel: ftrace: allocated 193 pages with 3 groups
Nov 27 05:15:29 np0005537642 kernel: Dynamic Preempt: voluntary
Nov 27 05:15:29 np0005537642 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 27 05:15:29 np0005537642 kernel: rcu: 	RCU event tracing is enabled.
Nov 27 05:15:29 np0005537642 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 27 05:15:29 np0005537642 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 27 05:15:29 np0005537642 kernel: 	Rude variant of Tasks RCU enabled.
Nov 27 05:15:29 np0005537642 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 27 05:15:29 np0005537642 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 27 05:15:29 np0005537642 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 27 05:15:29 np0005537642 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 27 05:15:29 np0005537642 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 27 05:15:29 np0005537642 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 27 05:15:29 np0005537642 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 27 05:15:29 np0005537642 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 27 05:15:29 np0005537642 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 27 05:15:29 np0005537642 kernel: Console: colour VGA+ 80x25
Nov 27 05:15:29 np0005537642 kernel: printk: console [ttyS0] enabled
Nov 27 05:15:29 np0005537642 kernel: ACPI: Core revision 20230331
Nov 27 05:15:29 np0005537642 kernel: APIC: Switch to symmetric I/O mode setup
Nov 27 05:15:29 np0005537642 kernel: x2apic enabled
Nov 27 05:15:29 np0005537642 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 27 05:15:29 np0005537642 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 27 05:15:29 np0005537642 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Nov 27 05:15:29 np0005537642 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 27 05:15:29 np0005537642 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 27 05:15:29 np0005537642 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 27 05:15:29 np0005537642 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 27 05:15:29 np0005537642 kernel: Spectre V2 : Mitigation: Retpolines
Nov 27 05:15:29 np0005537642 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 27 05:15:29 np0005537642 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 27 05:15:29 np0005537642 kernel: RETBleed: Mitigation: untrained return thunk
Nov 27 05:15:29 np0005537642 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 27 05:15:29 np0005537642 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 27 05:15:29 np0005537642 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 27 05:15:29 np0005537642 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 27 05:15:29 np0005537642 kernel: x86/bugs: return thunk changed
Nov 27 05:15:29 np0005537642 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 27 05:15:29 np0005537642 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 27 05:15:29 np0005537642 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 27 05:15:29 np0005537642 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 27 05:15:29 np0005537642 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 27 05:15:29 np0005537642 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 27 05:15:29 np0005537642 kernel: Freeing SMP alternatives memory: 40K
Nov 27 05:15:29 np0005537642 kernel: pid_max: default: 32768 minimum: 301
Nov 27 05:15:29 np0005537642 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 27 05:15:29 np0005537642 kernel: landlock: Up and running.
Nov 27 05:15:29 np0005537642 kernel: Yama: becoming mindful.
Nov 27 05:15:29 np0005537642 kernel: SELinux:  Initializing.
Nov 27 05:15:29 np0005537642 kernel: LSM support for eBPF active
Nov 27 05:15:29 np0005537642 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 27 05:15:29 np0005537642 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 27 05:15:29 np0005537642 kernel: ... version:                0
Nov 27 05:15:29 np0005537642 kernel: ... bit width:              48
Nov 27 05:15:29 np0005537642 kernel: ... generic registers:      6
Nov 27 05:15:29 np0005537642 kernel: ... value mask:             0000ffffffffffff
Nov 27 05:15:29 np0005537642 kernel: ... max period:             00007fffffffffff
Nov 27 05:15:29 np0005537642 kernel: ... fixed-purpose events:   0
Nov 27 05:15:29 np0005537642 kernel: ... event mask:             000000000000003f
Nov 27 05:15:29 np0005537642 kernel: signal: max sigframe size: 1776
Nov 27 05:15:29 np0005537642 kernel: rcu: Hierarchical SRCU implementation.
Nov 27 05:15:29 np0005537642 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 27 05:15:29 np0005537642 kernel: smp: Bringing up secondary CPUs ...
Nov 27 05:15:29 np0005537642 kernel: smpboot: x86: Booting SMP configuration:
Nov 27 05:15:29 np0005537642 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 27 05:15:29 np0005537642 kernel: smp: Brought up 1 node, 8 CPUs
Nov 27 05:15:29 np0005537642 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Nov 27 05:15:29 np0005537642 kernel: node 0 deferred pages initialised in 10ms
Nov 27 05:15:29 np0005537642 kernel: Memory: 7765920K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Nov 27 05:15:29 np0005537642 kernel: devtmpfs: initialized
Nov 27 05:15:29 np0005537642 kernel: x86/mm: Memory block size: 128MB
Nov 27 05:15:29 np0005537642 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 27 05:15:29 np0005537642 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: pinctrl core: initialized pinctrl subsystem
Nov 27 05:15:29 np0005537642 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 27 05:15:29 np0005537642 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 27 05:15:29 np0005537642 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 27 05:15:29 np0005537642 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 27 05:15:29 np0005537642 kernel: audit: initializing netlink subsys (disabled)
Nov 27 05:15:29 np0005537642 kernel: audit: type=2000 audit(1764238528.590:1): state=initialized audit_enabled=0 res=1
Nov 27 05:15:29 np0005537642 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 27 05:15:29 np0005537642 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 27 05:15:29 np0005537642 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 27 05:15:29 np0005537642 kernel: cpuidle: using governor menu
Nov 27 05:15:29 np0005537642 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 27 05:15:29 np0005537642 kernel: PCI: Using configuration type 1 for base access
Nov 27 05:15:29 np0005537642 kernel: PCI: Using configuration type 1 for extended access
Nov 27 05:15:29 np0005537642 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 27 05:15:29 np0005537642 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 27 05:15:29 np0005537642 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 27 05:15:29 np0005537642 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 27 05:15:29 np0005537642 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 27 05:15:29 np0005537642 kernel: Demotion targets for Node 0: null
Nov 27 05:15:29 np0005537642 kernel: cryptd: max_cpu_qlen set to 1000
Nov 27 05:15:29 np0005537642 kernel: ACPI: Added _OSI(Module Device)
Nov 27 05:15:29 np0005537642 kernel: ACPI: Added _OSI(Processor Device)
Nov 27 05:15:29 np0005537642 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 27 05:15:29 np0005537642 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 27 05:15:29 np0005537642 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 27 05:15:29 np0005537642 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 27 05:15:29 np0005537642 kernel: ACPI: Interpreter enabled
Nov 27 05:15:29 np0005537642 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 27 05:15:29 np0005537642 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 27 05:15:29 np0005537642 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 27 05:15:29 np0005537642 kernel: PCI: Using E820 reservations for host bridge windows
Nov 27 05:15:29 np0005537642 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 27 05:15:29 np0005537642 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 27 05:15:29 np0005537642 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [3] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [4] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [5] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [6] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [7] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [8] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [9] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [10] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [11] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [12] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [13] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [14] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [15] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [16] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [17] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [18] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [19] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [20] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [21] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [22] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [23] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [24] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [25] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [26] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [27] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [28] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [29] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [30] registered
Nov 27 05:15:29 np0005537642 kernel: acpiphp: Slot [31] registered
Nov 27 05:15:29 np0005537642 kernel: PCI host bridge to bus 0000:00
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 27 05:15:29 np0005537642 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 27 05:15:29 np0005537642 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 27 05:15:29 np0005537642 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 27 05:15:29 np0005537642 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 27 05:15:29 np0005537642 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 27 05:15:29 np0005537642 kernel: iommu: Default domain type: Translated
Nov 27 05:15:29 np0005537642 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 27 05:15:29 np0005537642 kernel: SCSI subsystem initialized
Nov 27 05:15:29 np0005537642 kernel: ACPI: bus type USB registered
Nov 27 05:15:29 np0005537642 kernel: usbcore: registered new interface driver usbfs
Nov 27 05:15:29 np0005537642 kernel: usbcore: registered new interface driver hub
Nov 27 05:15:29 np0005537642 kernel: usbcore: registered new device driver usb
Nov 27 05:15:29 np0005537642 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 27 05:15:29 np0005537642 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 27 05:15:29 np0005537642 kernel: PTP clock support registered
Nov 27 05:15:29 np0005537642 kernel: EDAC MC: Ver: 3.0.0
Nov 27 05:15:29 np0005537642 kernel: NetLabel: Initializing
Nov 27 05:15:29 np0005537642 kernel: NetLabel:  domain hash size = 128
Nov 27 05:15:29 np0005537642 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 27 05:15:29 np0005537642 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 27 05:15:29 np0005537642 kernel: PCI: Using ACPI for IRQ routing
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 27 05:15:29 np0005537642 kernel: vgaarb: loaded
Nov 27 05:15:29 np0005537642 kernel: clocksource: Switched to clocksource kvm-clock
Nov 27 05:15:29 np0005537642 kernel: VFS: Disk quotas dquot_6.6.0
Nov 27 05:15:29 np0005537642 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 27 05:15:29 np0005537642 kernel: pnp: PnP ACPI init
Nov 27 05:15:29 np0005537642 kernel: pnp: PnP ACPI: found 5 devices
Nov 27 05:15:29 np0005537642 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 27 05:15:29 np0005537642 kernel: NET: Registered PF_INET protocol family
Nov 27 05:15:29 np0005537642 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 27 05:15:29 np0005537642 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 27 05:15:29 np0005537642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 27 05:15:29 np0005537642 kernel: NET: Registered PF_XDP protocol family
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 27 05:15:29 np0005537642 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 27 05:15:29 np0005537642 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 27 05:15:29 np0005537642 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 89171 usecs
Nov 27 05:15:29 np0005537642 kernel: PCI: CLS 0 bytes, default 64
Nov 27 05:15:29 np0005537642 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 27 05:15:29 np0005537642 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 27 05:15:29 np0005537642 kernel: ACPI: bus type thunderbolt registered
Nov 27 05:15:29 np0005537642 kernel: Trying to unpack rootfs image as initramfs...
Nov 27 05:15:29 np0005537642 kernel: Initialise system trusted keyrings
Nov 27 05:15:29 np0005537642 kernel: Key type blacklist registered
Nov 27 05:15:29 np0005537642 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 27 05:15:29 np0005537642 kernel: zbud: loaded
Nov 27 05:15:29 np0005537642 kernel: integrity: Platform Keyring initialized
Nov 27 05:15:29 np0005537642 kernel: integrity: Machine keyring initialized
Nov 27 05:15:29 np0005537642 kernel: Freeing initrd memory: 85868K
Nov 27 05:15:29 np0005537642 kernel: NET: Registered PF_ALG protocol family
Nov 27 05:15:29 np0005537642 kernel: xor: automatically using best checksumming function   avx       
Nov 27 05:15:29 np0005537642 kernel: Key type asymmetric registered
Nov 27 05:15:29 np0005537642 kernel: Asymmetric key parser 'x509' registered
Nov 27 05:15:29 np0005537642 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 27 05:15:29 np0005537642 kernel: io scheduler mq-deadline registered
Nov 27 05:15:29 np0005537642 kernel: io scheduler kyber registered
Nov 27 05:15:29 np0005537642 kernel: io scheduler bfq registered
Nov 27 05:15:29 np0005537642 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 27 05:15:29 np0005537642 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 27 05:15:29 np0005537642 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 27 05:15:29 np0005537642 kernel: ACPI: button: Power Button [PWRF]
Nov 27 05:15:29 np0005537642 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 27 05:15:29 np0005537642 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 27 05:15:29 np0005537642 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 27 05:15:29 np0005537642 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 27 05:15:29 np0005537642 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 27 05:15:29 np0005537642 kernel: Non-volatile memory driver v1.3
Nov 27 05:15:29 np0005537642 kernel: rdac: device handler registered
Nov 27 05:15:29 np0005537642 kernel: hp_sw: device handler registered
Nov 27 05:15:29 np0005537642 kernel: emc: device handler registered
Nov 27 05:15:29 np0005537642 kernel: alua: device handler registered
Nov 27 05:15:29 np0005537642 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 27 05:15:29 np0005537642 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 27 05:15:29 np0005537642 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 27 05:15:29 np0005537642 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 27 05:15:29 np0005537642 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 27 05:15:29 np0005537642 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 27 05:15:29 np0005537642 kernel: usb usb1: Product: UHCI Host Controller
Nov 27 05:15:29 np0005537642 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 27 05:15:29 np0005537642 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 27 05:15:29 np0005537642 kernel: hub 1-0:1.0: USB hub found
Nov 27 05:15:29 np0005537642 kernel: hub 1-0:1.0: 2 ports detected
Nov 27 05:15:29 np0005537642 kernel: usbcore: registered new interface driver usbserial_generic
Nov 27 05:15:29 np0005537642 kernel: usbserial: USB Serial support registered for generic
Nov 27 05:15:29 np0005537642 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 27 05:15:29 np0005537642 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 27 05:15:29 np0005537642 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 27 05:15:29 np0005537642 kernel: mousedev: PS/2 mouse device common for all mice
Nov 27 05:15:29 np0005537642 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 27 05:15:29 np0005537642 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 27 05:15:29 np0005537642 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 27 05:15:29 np0005537642 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 27 05:15:29 np0005537642 kernel: rtc_cmos 00:04: registered as rtc0
Nov 27 05:15:29 np0005537642 kernel: rtc_cmos 00:04: setting system clock to 2025-11-27T10:15:28 UTC (1764238528)
Nov 27 05:15:29 np0005537642 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 27 05:15:29 np0005537642 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 27 05:15:29 np0005537642 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 27 05:15:29 np0005537642 kernel: usbcore: registered new interface driver usbhid
Nov 27 05:15:29 np0005537642 kernel: usbhid: USB HID core driver
Nov 27 05:15:29 np0005537642 kernel: drop_monitor: Initializing network drop monitor service
Nov 27 05:15:29 np0005537642 kernel: Initializing XFRM netlink socket
Nov 27 05:15:29 np0005537642 kernel: NET: Registered PF_INET6 protocol family
Nov 27 05:15:29 np0005537642 kernel: Segment Routing with IPv6
Nov 27 05:15:29 np0005537642 kernel: NET: Registered PF_PACKET protocol family
Nov 27 05:15:29 np0005537642 kernel: mpls_gso: MPLS GSO support
Nov 27 05:15:29 np0005537642 kernel: IPI shorthand broadcast: enabled
Nov 27 05:15:29 np0005537642 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 27 05:15:29 np0005537642 kernel: AES CTR mode by8 optimization enabled
Nov 27 05:15:29 np0005537642 kernel: sched_clock: Marking stable (1245005310, 146996460)->(1468473650, -76471880)
Nov 27 05:15:29 np0005537642 kernel: registered taskstats version 1
Nov 27 05:15:29 np0005537642 kernel: Loading compiled-in X.509 certificates
Nov 27 05:15:29 np0005537642 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 27 05:15:29 np0005537642 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 27 05:15:29 np0005537642 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 27 05:15:29 np0005537642 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 27 05:15:29 np0005537642 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 27 05:15:29 np0005537642 kernel: Demotion targets for Node 0: null
Nov 27 05:15:29 np0005537642 kernel: page_owner is disabled
Nov 27 05:15:29 np0005537642 kernel: Key type .fscrypt registered
Nov 27 05:15:29 np0005537642 kernel: Key type fscrypt-provisioning registered
Nov 27 05:15:29 np0005537642 kernel: Key type big_key registered
Nov 27 05:15:29 np0005537642 kernel: Key type encrypted registered
Nov 27 05:15:29 np0005537642 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 27 05:15:29 np0005537642 kernel: Loading compiled-in module X.509 certificates
Nov 27 05:15:29 np0005537642 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 27 05:15:29 np0005537642 kernel: ima: Allocated hash algorithm: sha256
Nov 27 05:15:29 np0005537642 kernel: ima: No architecture policies found
Nov 27 05:15:29 np0005537642 kernel: evm: Initialising EVM extended attributes:
Nov 27 05:15:29 np0005537642 kernel: evm: security.selinux
Nov 27 05:15:29 np0005537642 kernel: evm: security.SMACK64 (disabled)
Nov 27 05:15:29 np0005537642 kernel: evm: security.SMACK64EXEC (disabled)
Nov 27 05:15:29 np0005537642 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 27 05:15:29 np0005537642 kernel: evm: security.SMACK64MMAP (disabled)
Nov 27 05:15:29 np0005537642 kernel: evm: security.apparmor (disabled)
Nov 27 05:15:29 np0005537642 kernel: evm: security.ima
Nov 27 05:15:29 np0005537642 kernel: evm: security.capability
Nov 27 05:15:29 np0005537642 kernel: evm: HMAC attrs: 0x1
Nov 27 05:15:29 np0005537642 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 27 05:15:29 np0005537642 kernel: Running certificate verification RSA selftest
Nov 27 05:15:29 np0005537642 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 27 05:15:29 np0005537642 kernel: Running certificate verification ECDSA selftest
Nov 27 05:15:29 np0005537642 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 27 05:15:29 np0005537642 kernel: clk: Disabling unused clocks
Nov 27 05:15:29 np0005537642 kernel: Freeing unused decrypted memory: 2028K
Nov 27 05:15:29 np0005537642 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 27 05:15:29 np0005537642 kernel: Write protecting the kernel read-only data: 30720k
Nov 27 05:15:29 np0005537642 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 27 05:15:29 np0005537642 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 27 05:15:29 np0005537642 kernel: Run /init as init process
Nov 27 05:15:29 np0005537642 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 27 05:15:29 np0005537642 systemd: Detected virtualization kvm.
Nov 27 05:15:29 np0005537642 systemd: Detected architecture x86-64.
Nov 27 05:15:29 np0005537642 systemd: Running in initrd.
Nov 27 05:15:29 np0005537642 systemd: No hostname configured, using default hostname.
Nov 27 05:15:29 np0005537642 systemd: Hostname set to <localhost>.
Nov 27 05:15:29 np0005537642 systemd: Initializing machine ID from VM UUID.
Nov 27 05:15:29 np0005537642 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 27 05:15:29 np0005537642 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 27 05:15:29 np0005537642 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 27 05:15:29 np0005537642 kernel: usb 1-1: Manufacturer: QEMU
Nov 27 05:15:29 np0005537642 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 27 05:15:29 np0005537642 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 27 05:15:29 np0005537642 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 27 05:15:29 np0005537642 systemd: Queued start job for default target Initrd Default Target.
Nov 27 05:15:29 np0005537642 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 27 05:15:29 np0005537642 systemd: Reached target Local Encrypted Volumes.
Nov 27 05:15:29 np0005537642 systemd: Reached target Initrd /usr File System.
Nov 27 05:15:29 np0005537642 systemd: Reached target Local File Systems.
Nov 27 05:15:29 np0005537642 systemd: Reached target Path Units.
Nov 27 05:15:29 np0005537642 systemd: Reached target Slice Units.
Nov 27 05:15:29 np0005537642 systemd: Reached target Swaps.
Nov 27 05:15:29 np0005537642 systemd: Reached target Timer Units.
Nov 27 05:15:29 np0005537642 systemd: Listening on D-Bus System Message Bus Socket.
Nov 27 05:15:29 np0005537642 systemd: Listening on Journal Socket (/dev/log).
Nov 27 05:15:29 np0005537642 systemd: Listening on Journal Socket.
Nov 27 05:15:29 np0005537642 systemd: Listening on udev Control Socket.
Nov 27 05:15:29 np0005537642 systemd: Listening on udev Kernel Socket.
Nov 27 05:15:29 np0005537642 systemd: Reached target Socket Units.
Nov 27 05:15:29 np0005537642 systemd: Starting Create List of Static Device Nodes...
Nov 27 05:15:29 np0005537642 systemd: Starting Journal Service...
Nov 27 05:15:29 np0005537642 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 27 05:15:29 np0005537642 systemd: Starting Apply Kernel Variables...
Nov 27 05:15:29 np0005537642 systemd: Starting Create System Users...
Nov 27 05:15:29 np0005537642 systemd: Starting Setup Virtual Console...
Nov 27 05:15:29 np0005537642 systemd: Finished Create List of Static Device Nodes.
Nov 27 05:15:29 np0005537642 systemd: Finished Apply Kernel Variables.
Nov 27 05:15:29 np0005537642 systemd: Finished Create System Users.
Nov 27 05:15:29 np0005537642 systemd: Starting Create Static Device Nodes in /dev...
Nov 27 05:15:29 np0005537642 systemd-journald[301]: Journal started
Nov 27 05:15:29 np0005537642 systemd-journald[301]: Runtime Journal (/run/log/journal/08c144d4d9504b5ebc73c3be48ec71c6) is 8.0M, max 153.6M, 145.6M free.
Nov 27 05:15:29 np0005537642 systemd-sysusers[306]: Creating group 'users' with GID 100.
Nov 27 05:15:29 np0005537642 systemd-sysusers[306]: Creating group 'dbus' with GID 81.
Nov 27 05:15:29 np0005537642 systemd-sysusers[306]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 27 05:15:29 np0005537642 systemd: Started Journal Service.
Nov 27 05:15:29 np0005537642 systemd[1]: Starting Create Volatile Files and Directories...
Nov 27 05:15:29 np0005537642 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 27 05:15:29 np0005537642 systemd[1]: Finished Create Volatile Files and Directories.
Nov 27 05:15:29 np0005537642 systemd[1]: Finished Setup Virtual Console.
Nov 27 05:15:29 np0005537642 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 27 05:15:29 np0005537642 systemd[1]: Starting dracut cmdline hook...
Nov 27 05:15:29 np0005537642 dracut-cmdline[320]: dracut-9 dracut-057-102.git20250818.el9
Nov 27 05:15:29 np0005537642 dracut-cmdline[320]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 27 05:15:29 np0005537642 systemd[1]: Finished dracut cmdline hook.
Nov 27 05:15:29 np0005537642 systemd[1]: Starting dracut pre-udev hook...
Nov 27 05:15:29 np0005537642 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 27 05:15:29 np0005537642 kernel: device-mapper: uevent: version 1.0.3
Nov 27 05:15:29 np0005537642 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 27 05:15:29 np0005537642 kernel: RPC: Registered named UNIX socket transport module.
Nov 27 05:15:29 np0005537642 kernel: RPC: Registered udp transport module.
Nov 27 05:15:29 np0005537642 kernel: RPC: Registered tcp transport module.
Nov 27 05:15:29 np0005537642 kernel: RPC: Registered tcp-with-tls transport module.
Nov 27 05:15:29 np0005537642 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 27 05:15:29 np0005537642 rpc.statd[438]: Version 2.5.4 starting
Nov 27 05:15:29 np0005537642 rpc.statd[438]: Initializing NSM state
Nov 27 05:15:29 np0005537642 rpc.idmapd[443]: Setting log level to 0
Nov 27 05:15:29 np0005537642 systemd[1]: Finished dracut pre-udev hook.
Nov 27 05:15:29 np0005537642 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 27 05:15:29 np0005537642 systemd-udevd[456]: Using default interface naming scheme 'rhel-9.0'.
Nov 27 05:15:29 np0005537642 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 27 05:15:29 np0005537642 systemd[1]: Starting dracut pre-trigger hook...
Nov 27 05:15:29 np0005537642 systemd[1]: Finished dracut pre-trigger hook.
Nov 27 05:15:29 np0005537642 systemd[1]: Starting Coldplug All udev Devices...
Nov 27 05:15:29 np0005537642 systemd[1]: Created slice Slice /system/modprobe.
Nov 27 05:15:29 np0005537642 systemd[1]: Starting Load Kernel Module configfs...
Nov 27 05:15:29 np0005537642 systemd[1]: Finished Coldplug All udev Devices.
Nov 27 05:15:29 np0005537642 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 27 05:15:29 np0005537642 systemd[1]: Finished Load Kernel Module configfs.
Nov 27 05:15:29 np0005537642 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 27 05:15:29 np0005537642 systemd[1]: Reached target Network.
Nov 27 05:15:29 np0005537642 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 27 05:15:29 np0005537642 systemd[1]: Starting dracut initqueue hook...
Nov 27 05:15:29 np0005537642 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 27 05:15:29 np0005537642 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 27 05:15:29 np0005537642 kernel: vda: vda1
Nov 27 05:15:30 np0005537642 systemd-udevd[460]: Network interface NamePolicy= disabled on kernel command line.
Nov 27 05:15:30 np0005537642 kernel: scsi host0: ata_piix
Nov 27 05:15:30 np0005537642 kernel: scsi host1: ata_piix
Nov 27 05:15:30 np0005537642 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 27 05:15:30 np0005537642 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 27 05:15:30 np0005537642 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 27 05:15:30 np0005537642 systemd[1]: Reached target Initrd Root Device.
Nov 27 05:15:30 np0005537642 systemd[1]: Mounting Kernel Configuration File System...
Nov 27 05:15:30 np0005537642 kernel: ata1: found unknown device (class 0)
Nov 27 05:15:30 np0005537642 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 27 05:15:30 np0005537642 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 27 05:15:30 np0005537642 systemd[1]: Mounted Kernel Configuration File System.
Nov 27 05:15:30 np0005537642 systemd[1]: Reached target System Initialization.
Nov 27 05:15:30 np0005537642 systemd[1]: Reached target Basic System.
Nov 27 05:15:30 np0005537642 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 27 05:15:30 np0005537642 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 27 05:15:30 np0005537642 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 27 05:15:30 np0005537642 systemd[1]: Finished dracut initqueue hook.
Nov 27 05:15:30 np0005537642 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 27 05:15:30 np0005537642 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 27 05:15:30 np0005537642 systemd[1]: Reached target Remote File Systems.
Nov 27 05:15:30 np0005537642 systemd[1]: Starting dracut pre-mount hook...
Nov 27 05:15:30 np0005537642 systemd[1]: Finished dracut pre-mount hook.
Nov 27 05:15:30 np0005537642 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 27 05:15:30 np0005537642 systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Nov 27 05:15:30 np0005537642 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 27 05:15:30 np0005537642 systemd[1]: Mounting /sysroot...
Nov 27 05:15:30 np0005537642 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 27 05:15:30 np0005537642 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 27 05:15:31 np0005537642 kernel: XFS (vda1): Ending clean mount
Nov 27 05:15:31 np0005537642 systemd[1]: Mounted /sysroot.
Nov 27 05:15:31 np0005537642 systemd[1]: Reached target Initrd Root File System.
Nov 27 05:15:31 np0005537642 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 27 05:15:31 np0005537642 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 27 05:15:31 np0005537642 systemd[1]: Reached target Initrd File Systems.
Nov 27 05:15:31 np0005537642 systemd[1]: Reached target Initrd Default Target.
Nov 27 05:15:31 np0005537642 systemd[1]: Starting dracut mount hook...
Nov 27 05:15:31 np0005537642 systemd[1]: Finished dracut mount hook.
Nov 27 05:15:31 np0005537642 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 27 05:15:31 np0005537642 rpc.idmapd[443]: exiting on signal 15
Nov 27 05:15:31 np0005537642 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 27 05:15:31 np0005537642 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Network.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Timer Units.
Nov 27 05:15:31 np0005537642 systemd[1]: dbus.socket: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 27 05:15:31 np0005537642 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Initrd Default Target.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Basic System.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Initrd Root Device.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Initrd /usr File System.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Path Units.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Remote File Systems.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Slice Units.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Socket Units.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target System Initialization.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Local File Systems.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Swaps.
Nov 27 05:15:31 np0005537642 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped dracut mount hook.
Nov 27 05:15:31 np0005537642 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped dracut pre-mount hook.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 27 05:15:31 np0005537642 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped dracut initqueue hook.
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Apply Kernel Variables.
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Coldplug All udev Devices.
Nov 27 05:15:31 np0005537642 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped dracut pre-trigger hook.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Setup Virtual Console.
Nov 27 05:15:31 np0005537642 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 27 05:15:31 np0005537642 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Closed udev Control Socket.
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Closed udev Kernel Socket.
Nov 27 05:15:31 np0005537642 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped dracut pre-udev hook.
Nov 27 05:15:31 np0005537642 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped dracut cmdline hook.
Nov 27 05:15:31 np0005537642 systemd[1]: Starting Cleanup udev Database...
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 27 05:15:31 np0005537642 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 27 05:15:31 np0005537642 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Stopped Create System Users.
Nov 27 05:15:31 np0005537642 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 27 05:15:31 np0005537642 systemd[1]: Finished Cleanup udev Database.
Nov 27 05:15:31 np0005537642 systemd[1]: Reached target Switch Root.
Nov 27 05:15:31 np0005537642 systemd[1]: Starting Switch Root...
Nov 27 05:15:31 np0005537642 systemd[1]: Switching root.
Nov 27 05:15:31 np0005537642 systemd-journald[301]: Journal stopped
Nov 27 05:15:32 np0005537642 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 27 05:15:32 np0005537642 kernel: audit: type=1404 audit(1764238531.623:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 27 05:15:32 np0005537642 kernel: SELinux:  policy capability network_peer_controls=1
Nov 27 05:15:32 np0005537642 kernel: SELinux:  policy capability open_perms=1
Nov 27 05:15:32 np0005537642 kernel: SELinux:  policy capability extended_socket_class=1
Nov 27 05:15:32 np0005537642 kernel: SELinux:  policy capability always_check_network=0
Nov 27 05:15:32 np0005537642 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 27 05:15:32 np0005537642 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 27 05:15:32 np0005537642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 27 05:15:32 np0005537642 kernel: audit: type=1403 audit(1764238531.787:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 27 05:15:32 np0005537642 systemd: Successfully loaded SELinux policy in 170.651ms.
Nov 27 05:15:32 np0005537642 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.196ms.
Nov 27 05:15:32 np0005537642 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 27 05:15:32 np0005537642 systemd: Detected virtualization kvm.
Nov 27 05:15:32 np0005537642 systemd: Detected architecture x86-64.
Nov 27 05:15:32 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:15:32 np0005537642 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 27 05:15:32 np0005537642 systemd: Stopped Switch Root.
Nov 27 05:15:32 np0005537642 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 27 05:15:32 np0005537642 systemd: Created slice Slice /system/getty.
Nov 27 05:15:32 np0005537642 systemd: Created slice Slice /system/serial-getty.
Nov 27 05:15:32 np0005537642 systemd: Created slice Slice /system/sshd-keygen.
Nov 27 05:15:32 np0005537642 systemd: Created slice User and Session Slice.
Nov 27 05:15:32 np0005537642 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 27 05:15:32 np0005537642 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 27 05:15:32 np0005537642 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 27 05:15:32 np0005537642 systemd: Reached target Local Encrypted Volumes.
Nov 27 05:15:32 np0005537642 systemd: Stopped target Switch Root.
Nov 27 05:15:32 np0005537642 systemd: Stopped target Initrd File Systems.
Nov 27 05:15:32 np0005537642 systemd: Stopped target Initrd Root File System.
Nov 27 05:15:32 np0005537642 systemd: Reached target Local Integrity Protected Volumes.
Nov 27 05:15:32 np0005537642 systemd: Reached target Path Units.
Nov 27 05:15:32 np0005537642 systemd: Reached target rpc_pipefs.target.
Nov 27 05:15:32 np0005537642 systemd: Reached target Slice Units.
Nov 27 05:15:32 np0005537642 systemd: Reached target Swaps.
Nov 27 05:15:32 np0005537642 systemd: Reached target Local Verity Protected Volumes.
Nov 27 05:15:32 np0005537642 systemd: Listening on RPCbind Server Activation Socket.
Nov 27 05:15:32 np0005537642 systemd: Reached target RPC Port Mapper.
Nov 27 05:15:32 np0005537642 systemd: Listening on Process Core Dump Socket.
Nov 27 05:15:32 np0005537642 systemd: Listening on initctl Compatibility Named Pipe.
Nov 27 05:15:32 np0005537642 systemd: Listening on udev Control Socket.
Nov 27 05:15:32 np0005537642 systemd: Listening on udev Kernel Socket.
Nov 27 05:15:32 np0005537642 systemd: Mounting Huge Pages File System...
Nov 27 05:15:32 np0005537642 systemd: Mounting POSIX Message Queue File System...
Nov 27 05:15:32 np0005537642 systemd: Mounting Kernel Debug File System...
Nov 27 05:15:32 np0005537642 systemd: Mounting Kernel Trace File System...
Nov 27 05:15:32 np0005537642 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 27 05:15:32 np0005537642 systemd: Starting Create List of Static Device Nodes...
Nov 27 05:15:32 np0005537642 systemd: Starting Load Kernel Module configfs...
Nov 27 05:15:32 np0005537642 systemd: Starting Load Kernel Module drm...
Nov 27 05:15:32 np0005537642 systemd: Starting Load Kernel Module efi_pstore...
Nov 27 05:15:32 np0005537642 systemd: Starting Load Kernel Module fuse...
Nov 27 05:15:32 np0005537642 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 27 05:15:32 np0005537642 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 27 05:15:32 np0005537642 systemd: Stopped File System Check on Root Device.
Nov 27 05:15:32 np0005537642 systemd: Stopped Journal Service.
Nov 27 05:15:32 np0005537642 systemd: Starting Journal Service...
Nov 27 05:15:32 np0005537642 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 27 05:15:32 np0005537642 systemd: Starting Generate network units from Kernel command line...
Nov 27 05:15:32 np0005537642 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 27 05:15:32 np0005537642 systemd: Starting Remount Root and Kernel File Systems...
Nov 27 05:15:32 np0005537642 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 27 05:15:32 np0005537642 systemd: Starting Apply Kernel Variables...
Nov 27 05:15:32 np0005537642 kernel: fuse: init (API version 7.37)
Nov 27 05:15:32 np0005537642 systemd: Starting Coldplug All udev Devices...
Nov 27 05:15:32 np0005537642 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 27 05:15:32 np0005537642 systemd: Mounted Huge Pages File System.
Nov 27 05:15:32 np0005537642 systemd: Mounted POSIX Message Queue File System.
Nov 27 05:15:32 np0005537642 systemd-journald[677]: Journal started
Nov 27 05:15:32 np0005537642 systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 27 05:15:32 np0005537642 systemd[1]: Queued start job for default target Multi-User System.
Nov 27 05:15:32 np0005537642 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 27 05:15:32 np0005537642 systemd: Started Journal Service.
Nov 27 05:15:32 np0005537642 systemd[1]: Mounted Kernel Debug File System.
Nov 27 05:15:32 np0005537642 systemd[1]: Mounted Kernel Trace File System.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Create List of Static Device Nodes.
Nov 27 05:15:32 np0005537642 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Load Kernel Module configfs.
Nov 27 05:15:32 np0005537642 kernel: ACPI: bus type drm_connector registered
Nov 27 05:15:32 np0005537642 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 27 05:15:32 np0005537642 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Load Kernel Module drm.
Nov 27 05:15:32 np0005537642 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Load Kernel Module fuse.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Generate network units from Kernel command line.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 27 05:15:32 np0005537642 systemd[1]: Mounting FUSE Control File System...
Nov 27 05:15:32 np0005537642 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Rebuild Hardware Database...
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 27 05:15:32 np0005537642 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Load/Save OS Random Seed...
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Create System Users...
Nov 27 05:15:32 np0005537642 systemd-journald[677]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 27 05:15:32 np0005537642 systemd-journald[677]: Received client request to flush runtime journal.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Apply Kernel Variables.
Nov 27 05:15:32 np0005537642 systemd[1]: Mounted FUSE Control File System.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Load/Save OS Random Seed.
Nov 27 05:15:32 np0005537642 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Create System Users.
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Coldplug All udev Devices.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 27 05:15:32 np0005537642 systemd[1]: Reached target Preparation for Local File Systems.
Nov 27 05:15:32 np0005537642 systemd[1]: Reached target Local File Systems.
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 27 05:15:32 np0005537642 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 27 05:15:32 np0005537642 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 27 05:15:32 np0005537642 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Automatic Boot Loader Update...
Nov 27 05:15:32 np0005537642 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Create Volatile Files and Directories...
Nov 27 05:15:32 np0005537642 bootctl[697]: Couldn't find EFI system partition, skipping.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Automatic Boot Loader Update.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Create Volatile Files and Directories.
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Security Auditing Service...
Nov 27 05:15:32 np0005537642 systemd[1]: Starting RPC Bind...
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Rebuild Journal Catalog...
Nov 27 05:15:32 np0005537642 auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 27 05:15:32 np0005537642 auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 27 05:15:32 np0005537642 systemd[1]: Started RPC Bind.
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Rebuild Journal Catalog.
Nov 27 05:15:32 np0005537642 augenrules[708]: /sbin/augenrules: No change
Nov 27 05:15:32 np0005537642 augenrules[723]: No rules
Nov 27 05:15:32 np0005537642 augenrules[723]: enabled 1
Nov 27 05:15:32 np0005537642 augenrules[723]: failure 1
Nov 27 05:15:32 np0005537642 augenrules[723]: pid 703
Nov 27 05:15:32 np0005537642 augenrules[723]: rate_limit 0
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_limit 8192
Nov 27 05:15:32 np0005537642 augenrules[723]: lost 0
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog 0
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_wait_time 60000
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_wait_time_actual 0
Nov 27 05:15:32 np0005537642 augenrules[723]: enabled 1
Nov 27 05:15:32 np0005537642 augenrules[723]: failure 1
Nov 27 05:15:32 np0005537642 augenrules[723]: pid 703
Nov 27 05:15:32 np0005537642 augenrules[723]: rate_limit 0
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_limit 8192
Nov 27 05:15:32 np0005537642 augenrules[723]: lost 0
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog 4
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_wait_time 60000
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_wait_time_actual 0
Nov 27 05:15:32 np0005537642 augenrules[723]: enabled 1
Nov 27 05:15:32 np0005537642 augenrules[723]: failure 1
Nov 27 05:15:32 np0005537642 augenrules[723]: pid 703
Nov 27 05:15:32 np0005537642 augenrules[723]: rate_limit 0
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_limit 8192
Nov 27 05:15:32 np0005537642 augenrules[723]: lost 0
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog 4
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_wait_time 60000
Nov 27 05:15:32 np0005537642 augenrules[723]: backlog_wait_time_actual 0
Nov 27 05:15:32 np0005537642 systemd[1]: Started Security Auditing Service.
Nov 27 05:15:32 np0005537642 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 27 05:15:32 np0005537642 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 27 05:15:33 np0005537642 systemd[1]: Finished Rebuild Hardware Database.
Nov 27 05:15:33 np0005537642 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 27 05:15:33 np0005537642 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Nov 27 05:15:33 np0005537642 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 27 05:15:33 np0005537642 systemd[1]: Starting Update is Completed...
Nov 27 05:15:33 np0005537642 systemd[1]: Finished Update is Completed.
Nov 27 05:15:33 np0005537642 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 27 05:15:33 np0005537642 systemd[1]: Reached target System Initialization.
Nov 27 05:15:33 np0005537642 systemd[1]: Started dnf makecache --timer.
Nov 27 05:15:33 np0005537642 systemd[1]: Started Daily rotation of log files.
Nov 27 05:15:33 np0005537642 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 27 05:15:33 np0005537642 systemd[1]: Reached target Timer Units.
Nov 27 05:15:33 np0005537642 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 27 05:15:33 np0005537642 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 27 05:15:33 np0005537642 systemd[1]: Reached target Socket Units.
Nov 27 05:15:33 np0005537642 systemd[1]: Starting D-Bus System Message Bus...
Nov 27 05:15:33 np0005537642 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 27 05:15:33 np0005537642 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 27 05:15:33 np0005537642 systemd[1]: Starting Load Kernel Module configfs...
Nov 27 05:15:33 np0005537642 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 27 05:15:33 np0005537642 systemd[1]: Finished Load Kernel Module configfs.
Nov 27 05:15:33 np0005537642 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Nov 27 05:15:33 np0005537642 systemd[1]: Started D-Bus System Message Bus.
Nov 27 05:15:33 np0005537642 systemd[1]: Reached target Basic System.
Nov 27 05:15:33 np0005537642 dbus-broker-lau[748]: Ready
Nov 27 05:15:33 np0005537642 systemd[1]: Starting NTP client/server...
Nov 27 05:15:33 np0005537642 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 27 05:15:33 np0005537642 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 27 05:15:33 np0005537642 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 27 05:15:33 np0005537642 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 27 05:15:33 np0005537642 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 27 05:15:33 np0005537642 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 27 05:15:33 np0005537642 systemd[1]: Starting IPv4 firewall with iptables...
Nov 27 05:15:33 np0005537642 chronyd[792]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 27 05:15:33 np0005537642 chronyd[792]: Loaded 0 symmetric keys
Nov 27 05:15:33 np0005537642 chronyd[792]: Using right/UTC timezone to obtain leap second data
Nov 27 05:15:33 np0005537642 chronyd[792]: Loaded seccomp filter (level 2)
Nov 27 05:15:33 np0005537642 systemd[1]: Started irqbalance daemon.
Nov 27 05:15:33 np0005537642 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 27 05:15:33 np0005537642 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 27 05:15:33 np0005537642 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 27 05:15:33 np0005537642 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 27 05:15:33 np0005537642 systemd[1]: Reached target sshd-keygen.target.
Nov 27 05:15:33 np0005537642 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 27 05:15:33 np0005537642 systemd[1]: Reached target User and Group Name Lookups.
Nov 27 05:15:33 np0005537642 systemd[1]: Starting User Login Management...
Nov 27 05:15:33 np0005537642 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 27 05:15:33 np0005537642 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 27 05:15:33 np0005537642 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 27 05:15:33 np0005537642 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 27 05:15:33 np0005537642 kernel: kvm_amd: TSC scaling supported
Nov 27 05:15:33 np0005537642 kernel: kvm_amd: Nested Virtualization enabled
Nov 27 05:15:33 np0005537642 kernel: kvm_amd: Nested Paging enabled
Nov 27 05:15:33 np0005537642 kernel: kvm_amd: LBR virtualization supported
Nov 27 05:15:33 np0005537642 systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 27 05:15:33 np0005537642 systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 27 05:15:33 np0005537642 kernel: Console: switching to colour dummy device 80x25
Nov 27 05:15:33 np0005537642 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 27 05:15:33 np0005537642 kernel: [drm] features: -context_init
Nov 27 05:15:33 np0005537642 kernel: [drm] number of scanouts: 1
Nov 27 05:15:33 np0005537642 kernel: [drm] number of cap sets: 0
Nov 27 05:15:33 np0005537642 systemd-logind[801]: New seat seat0.
Nov 27 05:15:33 np0005537642 systemd[1]: Started User Login Management.
Nov 27 05:15:33 np0005537642 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 27 05:15:33 np0005537642 systemd[1]: Started NTP client/server.
Nov 27 05:15:33 np0005537642 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 27 05:15:33 np0005537642 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 27 05:15:33 np0005537642 kernel: Console: switching to colour frame buffer device 128x48
Nov 27 05:15:33 np0005537642 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 27 05:15:33 np0005537642 iptables.init[782]: iptables: Applying firewall rules: [  OK  ]
Nov 27 05:15:33 np0005537642 systemd[1]: Finished IPv4 firewall with iptables.
Nov 27 05:15:34 np0005537642 cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 27 Nov 2025 10:15:34 +0000. Up 6.73 seconds.
Nov 27 05:15:34 np0005537642 systemd[1]: run-cloud\x2dinit-tmp-tmpi9d7ww8f.mount: Deactivated successfully.
Nov 27 05:15:34 np0005537642 systemd[1]: Starting Hostname Service...
Nov 27 05:15:34 np0005537642 systemd[1]: Started Hostname Service.
Nov 27 05:15:34 np0005537642 systemd-hostnamed[853]: Hostname set to <np0005537642.novalocal> (static)
Nov 27 05:15:34 np0005537642 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 27 05:15:34 np0005537642 systemd[1]: Reached target Preparation for Network.
Nov 27 05:15:34 np0005537642 systemd[1]: Starting Network Manager...
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7335] NetworkManager (version 1.54.1-1.el9) is starting... (boot:15470e99-5312-4d44-ad2f-5b0f2ebe5cc1)
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7343] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7576] manager[0x557c2c2ec080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7630] hostname: hostname: using hostnamed
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7632] hostname: static hostname changed from (none) to "np0005537642.novalocal"
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7637] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7755] manager[0x557c2c2ec080]: rfkill: Wi-Fi hardware radio set enabled
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7756] manager[0x557c2c2ec080]: rfkill: WWAN hardware radio set enabled
Nov 27 05:15:34 np0005537642 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7881] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7881] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7882] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7882] manager: Networking is enabled by state file
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7884] settings: Loaded settings plugin: keyfile (internal)
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7918] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7953] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7978] dhcp: init: Using DHCP client 'internal'
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7981] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.7999] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8016] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8027] device (lo): Activation: starting connection 'lo' (f909ba12-8db7-4bba-9d0e-d40b1f19aeea)
Nov 27 05:15:34 np0005537642 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8038] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8043] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:15:34 np0005537642 systemd[1]: Started Network Manager.
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8083] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8094] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 27 05:15:34 np0005537642 systemd[1]: Reached target Network.
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8100] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8103] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8107] device (eth0): carrier: link connected
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8111] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8121] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8128] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8132] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8133] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:15:34 np0005537642 systemd[1]: Starting Network Manager Wait Online...
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8138] manager: NetworkManager state is now CONNECTING
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8140] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8147] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8151] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:15:34 np0005537642 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 27 05:15:34 np0005537642 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8184] dhcp4 (eth0): state changed new lease, address=38.102.83.130
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8193] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8220] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8230] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8233] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8243] device (lo): Activation: successful, device activated.
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8255] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8258] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8263] manager: NetworkManager state is now CONNECTED_SITE
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8269] device (eth0): Activation: successful, device activated.
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8274] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 27 05:15:34 np0005537642 NetworkManager[857]: <info>  [1764238534.8280] manager: startup complete
Nov 27 05:15:34 np0005537642 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 27 05:15:34 np0005537642 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 27 05:15:34 np0005537642 systemd[1]: Reached target NFS client services.
Nov 27 05:15:34 np0005537642 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 27 05:15:34 np0005537642 systemd[1]: Reached target Remote File Systems.
Nov 27 05:15:34 np0005537642 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 27 05:15:34 np0005537642 systemd[1]: Finished Network Manager Wait Online.
Nov 27 05:15:34 np0005537642 systemd[1]: Starting Cloud-init: Network Stage...
Nov 27 05:15:35 np0005537642 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 27 Nov 2025 10:15:35 +0000. Up 7.86 seconds.
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.130         | 255.255.255.0 | global | fa:16:3e:79:5c:4a |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe79:5c4a/64 |       .       |  link  | fa:16:3e:79:5c:4a |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 27 05:15:35 np0005537642 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 27 05:15:36 np0005537642 cloud-init[921]: Generating public/private rsa key pair.
Nov 27 05:15:36 np0005537642 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 27 05:15:36 np0005537642 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 27 05:15:36 np0005537642 cloud-init[921]: The key fingerprint is:
Nov 27 05:15:36 np0005537642 cloud-init[921]: SHA256:vrsR927eUKaNejgUKHqhge/oVr3LTLc1wwu80kH8OEc root@np0005537642.novalocal
Nov 27 05:15:36 np0005537642 cloud-init[921]: The key's randomart image is:
Nov 27 05:15:36 np0005537642 cloud-init[921]: +---[RSA 3072]----+
Nov 27 05:15:36 np0005537642 cloud-init[921]: |                 |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |                 |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |   .   . .       |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |  . . o + E      |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |   . = +S+..  o  |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |    = oo+++. *   |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |   + ..o*==.= .  |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |  o .+o..Bo*oo   |
Nov 27 05:15:36 np0005537642 cloud-init[921]: | o.   +o*oo=o .  |
Nov 27 05:15:36 np0005537642 cloud-init[921]: +----[SHA256]-----+
Nov 27 05:15:36 np0005537642 cloud-init[921]: Generating public/private ecdsa key pair.
Nov 27 05:15:36 np0005537642 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 27 05:15:36 np0005537642 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 27 05:15:36 np0005537642 cloud-init[921]: The key fingerprint is:
Nov 27 05:15:36 np0005537642 cloud-init[921]: SHA256:uC/MzjU+SN/A8CKKMhXLqVlNcwqwqmAhPTwttVkHewg root@np0005537642.novalocal
Nov 27 05:15:36 np0005537642 cloud-init[921]: The key's randomart image is:
Nov 27 05:15:36 np0005537642 cloud-init[921]: +---[ECDSA 256]---+
Nov 27 05:15:36 np0005537642 cloud-init[921]: |    E ..         |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |.   ...o.        |
Nov 27 05:15:36 np0005537642 cloud-init[921]: | = o +o..        |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |o.O * oo         |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |o..@ +.+S        |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |o.* + o.+        |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |+* . =.+oo       |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |B .  .=+o..      |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |..   .o.o.       |
Nov 27 05:15:36 np0005537642 cloud-init[921]: +----[SHA256]-----+
Nov 27 05:15:36 np0005537642 cloud-init[921]: Generating public/private ed25519 key pair.
Nov 27 05:15:36 np0005537642 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 27 05:15:36 np0005537642 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 27 05:15:36 np0005537642 cloud-init[921]: The key fingerprint is:
Nov 27 05:15:36 np0005537642 cloud-init[921]: SHA256:2dQsD084OqaIqiFUkJA3rAtg7dumuP0brftoRExCxtc root@np0005537642.novalocal
Nov 27 05:15:36 np0005537642 cloud-init[921]: The key's randomart image is:
Nov 27 05:15:36 np0005537642 cloud-init[921]: +--[ED25519 256]--+
Nov 27 05:15:36 np0005537642 cloud-init[921]: |o+=o  .          |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |oo*+ o E   +     |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |oo.o=     * +    |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |o .. o   = B     |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |.o  +   S . o    |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |o  o =.o .       |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |o o =...         |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |.+.. .+          |
Nov 27 05:15:36 np0005537642 cloud-init[921]: |+...o*+.         |
Nov 27 05:15:36 np0005537642 cloud-init[921]: +----[SHA256]-----+
Nov 27 05:15:36 np0005537642 systemd[1]: Finished Cloud-init: Network Stage.
Nov 27 05:15:36 np0005537642 systemd[1]: Reached target Cloud-config availability.
Nov 27 05:15:36 np0005537642 systemd[1]: Reached target Network is Online.
Nov 27 05:15:36 np0005537642 systemd[1]: Starting Cloud-init: Config Stage...
Nov 27 05:15:36 np0005537642 systemd[1]: Starting Crash recovery kernel arming...
Nov 27 05:15:36 np0005537642 systemd[1]: Starting Notify NFS peers of a restart...
Nov 27 05:15:36 np0005537642 systemd[1]: Starting System Logging Service...
Nov 27 05:15:36 np0005537642 sm-notify[1003]: Version 2.5.4 starting
Nov 27 05:15:36 np0005537642 systemd[1]: Starting OpenSSH server daemon...
Nov 27 05:15:36 np0005537642 systemd[1]: Starting Permit User Sessions...
Nov 27 05:15:36 np0005537642 systemd[1]: Started Notify NFS peers of a restart.
Nov 27 05:15:36 np0005537642 systemd[1]: Started OpenSSH server daemon.
Nov 27 05:15:36 np0005537642 systemd[1]: Finished Permit User Sessions.
Nov 27 05:15:36 np0005537642 systemd[1]: Started Command Scheduler.
Nov 27 05:15:36 np0005537642 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Nov 27 05:15:36 np0005537642 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 27 05:15:36 np0005537642 systemd[1]: Started Getty on tty1.
Nov 27 05:15:36 np0005537642 systemd[1]: Started Serial Getty on ttyS0.
Nov 27 05:15:36 np0005537642 systemd[1]: Reached target Login Prompts.
Nov 27 05:15:36 np0005537642 systemd[1]: Started System Logging Service.
Nov 27 05:15:36 np0005537642 systemd[1]: Reached target Multi-User System.
Nov 27 05:15:36 np0005537642 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 27 05:15:36 np0005537642 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 27 05:15:36 np0005537642 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 27 05:15:36 np0005537642 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 27 05:15:36 np0005537642 kdumpctl[1017]: kdump: No kdump initial ramdisk found.
Nov 27 05:15:36 np0005537642 kdumpctl[1017]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 27 05:15:36 np0005537642 cloud-init[1095]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 27 Nov 2025 10:15:36 +0000. Up 9.50 seconds.
Nov 27 05:15:36 np0005537642 systemd[1]: Finished Cloud-init: Config Stage.
Nov 27 05:15:36 np0005537642 systemd[1]: Starting Cloud-init: Final Stage...
Nov 27 05:15:37 np0005537642 cloud-init[1264]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 27 Nov 2025 10:15:37 +0000. Up 9.88 seconds.
Nov 27 05:15:37 np0005537642 dracut[1268]: dracut-057-102.git20250818.el9
Nov 27 05:15:37 np0005537642 cloud-init[1283]: #############################################################
Nov 27 05:15:37 np0005537642 cloud-init[1286]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 27 05:15:37 np0005537642 cloud-init[1288]: 256 SHA256:uC/MzjU+SN/A8CKKMhXLqVlNcwqwqmAhPTwttVkHewg root@np0005537642.novalocal (ECDSA)
Nov 27 05:15:37 np0005537642 cloud-init[1290]: 256 SHA256:2dQsD084OqaIqiFUkJA3rAtg7dumuP0brftoRExCxtc root@np0005537642.novalocal (ED25519)
Nov 27 05:15:37 np0005537642 cloud-init[1292]: 3072 SHA256:vrsR927eUKaNejgUKHqhge/oVr3LTLc1wwu80kH8OEc root@np0005537642.novalocal (RSA)
Nov 27 05:15:37 np0005537642 cloud-init[1293]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 27 05:15:37 np0005537642 cloud-init[1294]: #############################################################
Nov 27 05:15:37 np0005537642 cloud-init[1264]: Cloud-init v. 24.4-7.el9 finished at Thu, 27 Nov 2025 10:15:37 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.08 seconds
Nov 27 05:15:37 np0005537642 dracut[1270]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 27 05:15:37 np0005537642 systemd[1]: Finished Cloud-init: Final Stage.
Nov 27 05:15:37 np0005537642 systemd[1]: Reached target Cloud-init target.
Nov 27 05:15:37 np0005537642 dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: memstrack is not available
Nov 27 05:15:38 np0005537642 dracut[1270]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 27 05:15:38 np0005537642 dracut[1270]: memstrack is not available
Nov 27 05:15:38 np0005537642 dracut[1270]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 27 05:15:39 np0005537642 dracut[1270]: *** Including module: systemd ***
Nov 27 05:15:39 np0005537642 dracut[1270]: *** Including module: fips ***
Nov 27 05:15:39 np0005537642 dracut[1270]: *** Including module: systemd-initrd ***
Nov 27 05:15:39 np0005537642 chronyd[792]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Nov 27 05:15:39 np0005537642 chronyd[792]: System clock TAI offset set to 37 seconds
Nov 27 05:15:39 np0005537642 dracut[1270]: *** Including module: i18n ***
Nov 27 05:15:39 np0005537642 dracut[1270]: *** Including module: drm ***
Nov 27 05:15:40 np0005537642 dracut[1270]: *** Including module: prefixdevname ***
Nov 27 05:15:40 np0005537642 dracut[1270]: *** Including module: kernel-modules ***
Nov 27 05:15:40 np0005537642 kernel: block vda: the capability attribute has been deprecated.
Nov 27 05:15:41 np0005537642 dracut[1270]: *** Including module: kernel-modules-extra ***
Nov 27 05:15:41 np0005537642 dracut[1270]: *** Including module: qemu ***
Nov 27 05:15:41 np0005537642 dracut[1270]: *** Including module: fstab-sys ***
Nov 27 05:15:41 np0005537642 dracut[1270]: *** Including module: rootfs-block ***
Nov 27 05:15:41 np0005537642 dracut[1270]: *** Including module: terminfo ***
Nov 27 05:15:41 np0005537642 dracut[1270]: *** Including module: udev-rules ***
Nov 27 05:15:42 np0005537642 dracut[1270]: Skipping udev rule: 91-permissions.rules
Nov 27 05:15:42 np0005537642 dracut[1270]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 27 05:15:42 np0005537642 dracut[1270]: *** Including module: virtiofs ***
Nov 27 05:15:42 np0005537642 dracut[1270]: *** Including module: dracut-systemd ***
Nov 27 05:15:42 np0005537642 dracut[1270]: *** Including module: usrmount ***
Nov 27 05:15:42 np0005537642 dracut[1270]: *** Including module: base ***
Nov 27 05:15:42 np0005537642 dracut[1270]: *** Including module: fs-lib ***
Nov 27 05:15:42 np0005537642 dracut[1270]: *** Including module: kdumpbase ***
Nov 27 05:15:43 np0005537642 dracut[1270]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 27 05:15:43 np0005537642 dracut[1270]:  microcode_ctl module: mangling fw_dir
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 27 05:15:43 np0005537642 irqbalance[794]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 27 05:15:43 np0005537642 irqbalance[794]: IRQ 25 affinity is now unmanaged
Nov 27 05:15:43 np0005537642 irqbalance[794]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 27 05:15:43 np0005537642 irqbalance[794]: IRQ 31 affinity is now unmanaged
Nov 27 05:15:43 np0005537642 irqbalance[794]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 27 05:15:43 np0005537642 irqbalance[794]: IRQ 28 affinity is now unmanaged
Nov 27 05:15:43 np0005537642 irqbalance[794]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 27 05:15:43 np0005537642 irqbalance[794]: IRQ 32 affinity is now unmanaged
Nov 27 05:15:43 np0005537642 irqbalance[794]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 27 05:15:43 np0005537642 irqbalance[794]: IRQ 30 affinity is now unmanaged
Nov 27 05:15:43 np0005537642 irqbalance[794]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 27 05:15:43 np0005537642 irqbalance[794]: IRQ 29 affinity is now unmanaged
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 27 05:15:43 np0005537642 dracut[1270]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 27 05:15:43 np0005537642 dracut[1270]: *** Including module: openssl ***
Nov 27 05:15:43 np0005537642 dracut[1270]: *** Including module: shutdown ***
Nov 27 05:15:43 np0005537642 dracut[1270]: *** Including module: squash ***
Nov 27 05:15:43 np0005537642 dracut[1270]: *** Including modules done ***
Nov 27 05:15:43 np0005537642 dracut[1270]: *** Installing kernel module dependencies ***
Nov 27 05:15:44 np0005537642 dracut[1270]: *** Installing kernel module dependencies done ***
Nov 27 05:15:44 np0005537642 dracut[1270]: *** Resolving executable dependencies ***
Nov 27 05:15:44 np0005537642 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 27 05:15:46 np0005537642 dracut[1270]: *** Resolving executable dependencies done ***
Nov 27 05:15:46 np0005537642 dracut[1270]: *** Generating early-microcode cpio image ***
Nov 27 05:15:46 np0005537642 dracut[1270]: *** Store current command line parameters ***
Nov 27 05:15:46 np0005537642 dracut[1270]: Stored kernel commandline:
Nov 27 05:15:46 np0005537642 dracut[1270]: No dracut internal kernel commandline stored in the initramfs
Nov 27 05:15:46 np0005537642 dracut[1270]: *** Install squash loader ***
Nov 27 05:15:47 np0005537642 dracut[1270]: *** Squashing the files inside the initramfs ***
Nov 27 05:15:48 np0005537642 dracut[1270]: *** Squashing the files inside the initramfs done ***
Nov 27 05:15:48 np0005537642 dracut[1270]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 27 05:15:48 np0005537642 dracut[1270]: *** Hardlinking files ***
Nov 27 05:15:48 np0005537642 dracut[1270]: *** Hardlinking files done ***
Nov 27 05:15:49 np0005537642 dracut[1270]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 27 05:15:49 np0005537642 kdumpctl[1017]: kdump: kexec: loaded kdump kernel
Nov 27 05:15:49 np0005537642 kdumpctl[1017]: kdump: Starting kdump: [OK]
Nov 27 05:15:49 np0005537642 systemd[1]: Finished Crash recovery kernel arming.
Nov 27 05:15:49 np0005537642 systemd[1]: Startup finished in 1.614s (kernel) + 2.700s (initrd) + 18.252s (userspace) = 22.568s.
Nov 27 05:15:52 np0005537642 systemd[1]: Created slice User Slice of UID 1000.
Nov 27 05:15:52 np0005537642 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 27 05:15:52 np0005537642 systemd-logind[801]: New session 1 of user zuul.
Nov 27 05:15:52 np0005537642 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 27 05:15:52 np0005537642 systemd[1]: Starting User Manager for UID 1000...
Nov 27 05:15:52 np0005537642 systemd[4299]: Queued start job for default target Main User Target.
Nov 27 05:15:52 np0005537642 systemd[4299]: Created slice User Application Slice.
Nov 27 05:15:52 np0005537642 systemd[4299]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 27 05:15:52 np0005537642 systemd[4299]: Started Daily Cleanup of User's Temporary Directories.
Nov 27 05:15:52 np0005537642 systemd[4299]: Reached target Paths.
Nov 27 05:15:52 np0005537642 systemd[4299]: Reached target Timers.
Nov 27 05:15:52 np0005537642 systemd[4299]: Starting D-Bus User Message Bus Socket...
Nov 27 05:15:52 np0005537642 systemd[4299]: Starting Create User's Volatile Files and Directories...
Nov 27 05:15:52 np0005537642 systemd[4299]: Finished Create User's Volatile Files and Directories.
Nov 27 05:15:52 np0005537642 systemd[4299]: Listening on D-Bus User Message Bus Socket.
Nov 27 05:15:52 np0005537642 systemd[4299]: Reached target Sockets.
Nov 27 05:15:52 np0005537642 systemd[4299]: Reached target Basic System.
Nov 27 05:15:52 np0005537642 systemd[4299]: Reached target Main User Target.
Nov 27 05:15:52 np0005537642 systemd[4299]: Startup finished in 118ms.
Nov 27 05:15:52 np0005537642 systemd[1]: Started User Manager for UID 1000.
Nov 27 05:15:52 np0005537642 systemd[1]: Started Session 1 of User zuul.
Nov 27 05:15:53 np0005537642 irqbalance[794]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 27 05:15:53 np0005537642 irqbalance[794]: IRQ 27 affinity is now unmanaged
Nov 27 05:15:53 np0005537642 python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:15:55 np0005537642 python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:16:04 np0005537642 python3[4467]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:16:04 np0005537642 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 27 05:16:05 np0005537642 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 27 05:16:07 np0005537642 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDPJOZO78ADbOjgo6c+bfe2ozRcJqS8Fkn83V4/dhlNdK+PjQaeiFtMq+zDQJHek63dGlmrdddPf8BEOtk0jSTDW0/DP7M7vmWabTcqKWoZi/UaEutL4FGbTqqjVuJvWi5VXWYO7u0C0k6P8QxhHRaawM3o1FX34lu1mDvwYCFG7p3RaLMFHpWhgPicF7kpEKxpf1FWnsxpvDU7KTa3+H89sdcj5/KAtPq+Hg+wBmvDYCqgzQ7JWjzaIkaGmLVkiGUSMpgEqeVoNiHVJqmzwL+wCO4Vmy733Ydxg+/o3kY30KOLkPCJ2RMuyn/CIjG77xpapPUzw26ka807k2cU7uL5aCTWcb17yT2JWg1IsdE9qlwvThddkIMHaNJ+RLCv29SyHEJN8d7VvneSFH3iAMvka2s2RA9S1TbE+4eJNAl5QA7dNGGeaZnfv0u7phOVRcVw6vNPLHSJFENwVMghEn2NxgQov4j4RPXZkHBcDK0w5Ye2B+caP9DZQaZxsSYEd+E= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:07 np0005537642 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:08 np0005537642 python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:16:08 np0005537642 python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764238567.9087024-251-26584569111045/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a4316214207c45d7bea9512b98fc75ac_id_rsa follow=False checksum=d0823428c766d1285f2e0af6acdfc00134759a52 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:09 np0005537642 python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:16:09 np0005537642 python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764238568.9186492-306-194723180294802/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a4316214207c45d7bea9512b98fc75ac_id_rsa.pub follow=False checksum=d0412db6fa1d01395f333eff1350d14dc70659cc backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:10 np0005537642 python3[4973]: ansible-ping Invoked with data=pong
Nov 27 05:16:12 np0005537642 python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:16:14 np0005537642 python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 27 05:16:15 np0005537642 python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:15 np0005537642 python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:15 np0005537642 python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:16 np0005537642 python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:16 np0005537642 python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:16 np0005537642 python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:18 np0005537642 python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:19 np0005537642 python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:16:20 np0005537642 python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764238578.916702-31-249738808752231/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:20 np0005537642 python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:21 np0005537642 python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:21 np0005537642 python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:21 np0005537642 python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:22 np0005537642 python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:22 np0005537642 python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:22 np0005537642 python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:22 np0005537642 python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:23 np0005537642 python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:23 np0005537642 python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:23 np0005537642 python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:24 np0005537642 python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:24 np0005537642 python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:24 np0005537642 python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:25 np0005537642 python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:25 np0005537642 python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:25 np0005537642 python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:25 np0005537642 python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:26 np0005537642 python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:26 np0005537642 python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:26 np0005537642 python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:27 np0005537642 python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:27 np0005537642 python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:27 np0005537642 python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:27 np0005537642 python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:28 np0005537642 python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:16:30 np0005537642 python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 27 05:16:30 np0005537642 systemd[1]: Starting Time & Date Service...
Nov 27 05:16:30 np0005537642 systemd[1]: Started Time & Date Service.
Nov 27 05:16:30 np0005537642 systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Nov 27 05:16:31 np0005537642 python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:32 np0005537642 python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:16:32 np0005537642 python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764238592.101562-251-64025694851940/source _original_basename=tmphjzj8p_2 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:33 np0005537642 python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:16:33 np0005537642 python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764238593.0206616-301-183590261898469/source _original_basename=tmpdm3uf0jh follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:34 np0005537642 python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:16:34 np0005537642 python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764238594.2332394-381-206064245133292/source _original_basename=tmpisjhvjen follow=False checksum=8c950ced7b8ca77f4bd28dc7b76c4d94d84c9436 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:35 np0005537642 python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:16:35 np0005537642 python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:16:36 np0005537642 python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:16:36 np0005537642 python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764238595.9001324-451-150460024095276/source _original_basename=tmp5c3kr69l follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:37 np0005537642 python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-4a46-6ff5-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:16:37 np0005537642 python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-4a46-6ff5-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 27 05:16:39 np0005537642 python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:16:59 np0005537642 python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:17:00 np0005537642 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 27 05:17:40 np0005537642 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 27 05:17:40 np0005537642 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0271] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 27 05:17:41 np0005537642 systemd-udevd[6946]: Network interface NamePolicy= disabled on kernel command line.
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0443] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0479] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0484] device (eth1): carrier: link connected
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0487] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0493] policy: auto-activating connection 'Wired connection 1' (36db9ee6-df8b-3fd7-85c2-f23e4a4c2171)
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0498] device (eth1): Activation: starting connection 'Wired connection 1' (36db9ee6-df8b-3fd7-85c2-f23e4a4c2171)
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0500] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0503] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0507] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:17:41 np0005537642 NetworkManager[857]: <info>  [1764238661.0511] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:17:42 np0005537642 python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-eddc-dff7-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:17:52 np0005537642 python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:17:52 np0005537642 python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764238671.942623-104-118209640613021/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=c3f204bab4258f0b19531250175f63c7bacccfb2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:17:53 np0005537642 python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 27 05:17:53 np0005537642 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 27 05:17:53 np0005537642 systemd[1]: Stopped Network Manager Wait Online.
Nov 27 05:17:53 np0005537642 systemd[1]: Stopping Network Manager Wait Online...
Nov 27 05:17:53 np0005537642 systemd[1]: Stopping Network Manager...
Nov 27 05:17:53 np0005537642 NetworkManager[857]: <info>  [1764238673.5211] caught SIGTERM, shutting down normally.
Nov 27 05:17:53 np0005537642 NetworkManager[857]: <info>  [1764238673.5226] dhcp4 (eth0): canceled DHCP transaction
Nov 27 05:17:53 np0005537642 NetworkManager[857]: <info>  [1764238673.5227] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:17:53 np0005537642 NetworkManager[857]: <info>  [1764238673.5227] dhcp4 (eth0): state changed no lease
Nov 27 05:17:53 np0005537642 NetworkManager[857]: <info>  [1764238673.5231] manager: NetworkManager state is now CONNECTING
Nov 27 05:17:53 np0005537642 NetworkManager[857]: <info>  [1764238673.5416] dhcp4 (eth1): canceled DHCP transaction
Nov 27 05:17:53 np0005537642 NetworkManager[857]: <info>  [1764238673.5417] dhcp4 (eth1): state changed no lease
Nov 27 05:17:53 np0005537642 NetworkManager[857]: <info>  [1764238673.5458] exiting (success)
Nov 27 05:17:53 np0005537642 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 27 05:17:53 np0005537642 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 27 05:17:53 np0005537642 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 27 05:17:53 np0005537642 systemd[1]: Stopped Network Manager.
Nov 27 05:17:53 np0005537642 systemd[1]: NetworkManager.service: Consumed 1.011s CPU time, 10.0M memory peak.
Nov 27 05:17:53 np0005537642 systemd[1]: Starting Network Manager...
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6084] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:15470e99-5312-4d44-ad2f-5b0f2ebe5cc1)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6087] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6136] manager[0x5559fbfcd070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 27 05:17:53 np0005537642 systemd[1]: Starting Hostname Service...
Nov 27 05:17:53 np0005537642 systemd[1]: Started Hostname Service.
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6782] hostname: hostname: using hostnamed
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6783] hostname: static hostname changed from (none) to "np0005537642.novalocal"
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6791] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6801] manager[0x5559fbfcd070]: rfkill: Wi-Fi hardware radio set enabled
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6802] manager[0x5559fbfcd070]: rfkill: WWAN hardware radio set enabled
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6855] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6856] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6857] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6858] manager: Networking is enabled by state file
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6862] settings: Loaded settings plugin: keyfile (internal)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6869] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6914] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6931] dhcp: init: Using DHCP client 'internal'
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6936] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6945] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6954] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6968] device (lo): Activation: starting connection 'lo' (f909ba12-8db7-4bba-9d0e-d40b1f19aeea)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6981] device (eth0): carrier: link connected
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6989] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6998] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.6999] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7011] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7027] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7039] device (eth1): carrier: link connected
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7048] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7059] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (36db9ee6-df8b-3fd7-85c2-f23e4a4c2171) (indicated)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7060] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7072] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7085] device (eth1): Activation: starting connection 'Wired connection 1' (36db9ee6-df8b-3fd7-85c2-f23e4a4c2171)
Nov 27 05:17:53 np0005537642 systemd[1]: Started Network Manager.
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7096] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7102] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7108] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7111] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7114] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7121] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7126] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7129] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7134] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7148] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7154] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7170] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7177] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7208] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7213] dhcp4 (eth0): state changed new lease, address=38.102.83.130
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7222] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7231] device (lo): Activation: successful, device activated.
Nov 27 05:17:53 np0005537642 systemd[1]: Starting Network Manager Wait Online...
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7246] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7306] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7354] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7357] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7360] manager: NetworkManager state is now CONNECTED_SITE
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7363] device (eth0): Activation: successful, device activated.
Nov 27 05:17:53 np0005537642 NetworkManager[7186]: <info>  [1764238673.7370] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 27 05:17:54 np0005537642 python3[7260]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-eddc-dff7-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:18:03 np0005537642 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 27 05:18:08 np0005537642 systemd[4299]: Starting Mark boot as successful...
Nov 27 05:18:08 np0005537642 systemd[4299]: Finished Mark boot as successful.
Nov 27 05:18:23 np0005537642 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.2937] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 27 05:18:39 np0005537642 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 27 05:18:39 np0005537642 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3307] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3311] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3321] device (eth1): Activation: successful, device activated.
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3327] manager: startup complete
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3329] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <warn>  [1764238719.3333] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3340] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 27 05:18:39 np0005537642 systemd[1]: Finished Network Manager Wait Online.
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3456] dhcp4 (eth1): canceled DHCP transaction
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3456] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3457] dhcp4 (eth1): state changed no lease
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3478] policy: auto-activating connection 'ci-private-network' (34d29167-9f55-51d9-826e-f0fb3219e8aa)
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3484] device (eth1): Activation: starting connection 'ci-private-network' (34d29167-9f55-51d9-826e-f0fb3219e8aa)
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3485] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3492] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3500] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3511] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3556] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3559] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:18:39 np0005537642 NetworkManager[7186]: <info>  [1764238719.3569] device (eth1): Activation: successful, device activated.
Nov 27 05:18:49 np0005537642 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 27 05:18:54 np0005537642 systemd-logind[801]: Session 1 logged out. Waiting for processes to exit.
Nov 27 05:19:58 np0005537642 systemd-logind[801]: New session 3 of user zuul.
Nov 27 05:19:58 np0005537642 systemd[1]: Started Session 3 of User zuul.
Nov 27 05:19:58 np0005537642 python3[7370]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:19:58 np0005537642 python3[7443]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764238798.2662325-373-42641351769194/source _original_basename=tmpsoc87ypv follow=False checksum=1aed22df9514a6990e2275ec98a59c1e2d4a74d1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:20:03 np0005537642 systemd-logind[801]: Session 3 logged out. Waiting for processes to exit.
Nov 27 05:20:03 np0005537642 systemd[1]: session-3.scope: Deactivated successfully.
Nov 27 05:20:03 np0005537642 systemd-logind[801]: Removed session 3.
Nov 27 05:21:08 np0005537642 systemd[4299]: Created slice User Background Tasks Slice.
Nov 27 05:21:08 np0005537642 systemd[4299]: Starting Cleanup of User's Temporary Files and Directories...
Nov 27 05:21:08 np0005537642 systemd[4299]: Finished Cleanup of User's Temporary Files and Directories.
Nov 27 05:25:41 np0005537642 systemd-logind[801]: New session 4 of user zuul.
Nov 27 05:25:41 np0005537642 systemd[1]: Started Session 4 of User zuul.
Nov 27 05:25:41 np0005537642 python3[7504]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-c991-9c3d-000000001cd8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:25:42 np0005537642 python3[7533]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:25:42 np0005537642 python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:25:43 np0005537642 python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:25:43 np0005537642 python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:25:44 np0005537642 python3[7637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:25:44 np0005537642 python3[7715]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:25:45 np0005537642 python3[7788]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764239144.4766777-519-124811749628213/source _original_basename=tmp8ymrhx76 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:25:46 np0005537642 python3[7838]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 27 05:25:46 np0005537642 systemd[1]: Reloading.
Nov 27 05:25:46 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:25:48 np0005537642 python3[7894]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 27 05:25:49 np0005537642 python3[7920]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:25:49 np0005537642 python3[7948]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:25:49 np0005537642 python3[7976]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:25:49 np0005537642 python3[8004]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:25:50 np0005537642 python3[8031]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-c991-9c3d-000000001cdf-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:25:51 np0005537642 python3[8061]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 27 05:25:54 np0005537642 systemd-logind[801]: Session 4 logged out. Waiting for processes to exit.
Nov 27 05:25:54 np0005537642 systemd[1]: session-4.scope: Deactivated successfully.
Nov 27 05:25:54 np0005537642 systemd[1]: session-4.scope: Consumed 4.825s CPU time.
Nov 27 05:25:54 np0005537642 systemd-logind[801]: Removed session 4.
Nov 27 05:25:56 np0005537642 systemd-logind[801]: New session 5 of user zuul.
Nov 27 05:25:56 np0005537642 systemd[1]: Started Session 5 of User zuul.
Nov 27 05:25:56 np0005537642 python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 27 05:26:11 np0005537642 kernel: SELinux:  Converting 385 SID table entries...
Nov 27 05:26:11 np0005537642 kernel: SELinux:  policy capability network_peer_controls=1
Nov 27 05:26:11 np0005537642 kernel: SELinux:  policy capability open_perms=1
Nov 27 05:26:11 np0005537642 kernel: SELinux:  policy capability extended_socket_class=1
Nov 27 05:26:11 np0005537642 kernel: SELinux:  policy capability always_check_network=0
Nov 27 05:26:11 np0005537642 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 27 05:26:11 np0005537642 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 27 05:26:11 np0005537642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 27 05:26:19 np0005537642 kernel: SELinux:  Converting 385 SID table entries...
Nov 27 05:26:19 np0005537642 kernel: SELinux:  policy capability network_peer_controls=1
Nov 27 05:26:19 np0005537642 kernel: SELinux:  policy capability open_perms=1
Nov 27 05:26:19 np0005537642 kernel: SELinux:  policy capability extended_socket_class=1
Nov 27 05:26:19 np0005537642 kernel: SELinux:  policy capability always_check_network=0
Nov 27 05:26:19 np0005537642 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 27 05:26:19 np0005537642 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 27 05:26:19 np0005537642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 27 05:26:27 np0005537642 kernel: SELinux:  Converting 385 SID table entries...
Nov 27 05:26:27 np0005537642 kernel: SELinux:  policy capability network_peer_controls=1
Nov 27 05:26:27 np0005537642 kernel: SELinux:  policy capability open_perms=1
Nov 27 05:26:27 np0005537642 kernel: SELinux:  policy capability extended_socket_class=1
Nov 27 05:26:27 np0005537642 kernel: SELinux:  policy capability always_check_network=0
Nov 27 05:26:27 np0005537642 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 27 05:26:27 np0005537642 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 27 05:26:27 np0005537642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 27 05:26:29 np0005537642 setsebool[8165]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 27 05:26:29 np0005537642 setsebool[8165]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Nov 27 05:26:39 np0005537642 kernel: SELinux:  Converting 388 SID table entries...
Nov 27 05:26:39 np0005537642 kernel: SELinux:  policy capability network_peer_controls=1
Nov 27 05:26:39 np0005537642 kernel: SELinux:  policy capability open_perms=1
Nov 27 05:26:39 np0005537642 kernel: SELinux:  policy capability extended_socket_class=1
Nov 27 05:26:39 np0005537642 kernel: SELinux:  policy capability always_check_network=0
Nov 27 05:26:39 np0005537642 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 27 05:26:39 np0005537642 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 27 05:26:39 np0005537642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 27 05:26:57 np0005537642 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 27 05:26:57 np0005537642 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 27 05:26:57 np0005537642 systemd[1]: Starting man-db-cache-update.service...
Nov 27 05:26:57 np0005537642 systemd[1]: Reloading.
Nov 27 05:26:57 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:26:57 np0005537642 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 27 05:27:05 np0005537642 python3[13907]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ec2-ffbe-46cf-a1e8-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:27:06 np0005537642 kernel: evm: overlay not supported
Nov 27 05:27:06 np0005537642 systemd[4299]: Starting D-Bus User Message Bus...
Nov 27 05:27:06 np0005537642 dbus-broker-launch[14137]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 27 05:27:06 np0005537642 dbus-broker-launch[14137]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 27 05:27:06 np0005537642 systemd[4299]: Started D-Bus User Message Bus.
Nov 27 05:27:06 np0005537642 dbus-broker-lau[14137]: Ready
Nov 27 05:27:06 np0005537642 systemd[4299]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 27 05:27:06 np0005537642 systemd[4299]: Created slice Slice /user.
Nov 27 05:27:06 np0005537642 systemd[4299]: podman-14067.scope: unit configures an IP firewall, but not running as root.
Nov 27 05:27:06 np0005537642 systemd[4299]: (This warning is only shown for the first unit using IP firewalling.)
Nov 27 05:27:06 np0005537642 systemd[4299]: Started podman-14067.scope.
Nov 27 05:27:06 np0005537642 systemd[4299]: Started podman-pause-60c49d2e.scope.
Nov 27 05:27:07 np0005537642 python3[14476]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.200:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.200:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:27:07 np0005537642 python3[14476]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 27 05:27:07 np0005537642 systemd[1]: session-5.scope: Deactivated successfully.
Nov 27 05:27:07 np0005537642 systemd[1]: session-5.scope: Consumed 58.740s CPU time.
Nov 27 05:27:07 np0005537642 systemd-logind[801]: Session 5 logged out. Waiting for processes to exit.
Nov 27 05:27:07 np0005537642 systemd-logind[801]: Removed session 5.
Nov 27 05:27:31 np0005537642 systemd-logind[801]: New session 6 of user zuul.
Nov 27 05:27:31 np0005537642 systemd[1]: Started Session 6 of User zuul.
Nov 27 05:27:31 np0005537642 python3[22622]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPbodGJVz1PtA7HMzemX9zrm47JRwz7uvA6ciQ+7tHFtfy6m9RRUYIZGhT3KK6Usyd1r4T1YvblZlZUAo6F4Uoo= zuul@np0005537641.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:27:32 np0005537642 python3[22740]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPbodGJVz1PtA7HMzemX9zrm47JRwz7uvA6ciQ+7tHFtfy6m9RRUYIZGhT3KK6Usyd1r4T1YvblZlZUAo6F4Uoo= zuul@np0005537641.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:27:33 np0005537642 python3[23034]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005537642.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 27 05:27:33 np0005537642 python3[23224]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPbodGJVz1PtA7HMzemX9zrm47JRwz7uvA6ciQ+7tHFtfy6m9RRUYIZGhT3KK6Usyd1r4T1YvblZlZUAo6F4Uoo= zuul@np0005537641.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 27 05:27:34 np0005537642 python3[23439]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:27:34 np0005537642 python3[23651]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764239253.9800532-167-46119585350006/source _original_basename=tmp7j3kdn1p follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:27:35 np0005537642 python3[23965]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 27 05:27:35 np0005537642 systemd[1]: Starting Hostname Service...
Nov 27 05:27:35 np0005537642 systemd[1]: Started Hostname Service.
Nov 27 05:27:35 np0005537642 systemd-hostnamed[24049]: Changed pretty hostname to 'compute-0'
Nov 27 05:27:35 np0005537642 systemd-hostnamed[24049]: Hostname set to <compute-0> (static)
Nov 27 05:27:35 np0005537642 NetworkManager[7186]: <info>  [1764239255.9246] hostname: static hostname changed from "np0005537642.novalocal" to "compute-0"
Nov 27 05:27:35 np0005537642 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 27 05:27:35 np0005537642 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 27 05:27:36 np0005537642 systemd[1]: session-6.scope: Deactivated successfully.
Nov 27 05:27:36 np0005537642 systemd[1]: session-6.scope: Consumed 2.732s CPU time.
Nov 27 05:27:36 np0005537642 systemd-logind[801]: Session 6 logged out. Waiting for processes to exit.
Nov 27 05:27:36 np0005537642 systemd-logind[801]: Removed session 6.
Nov 27 05:27:45 np0005537642 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 27 05:27:55 np0005537642 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 27 05:27:55 np0005537642 systemd[1]: Finished man-db-cache-update.service.
Nov 27 05:27:55 np0005537642 systemd[1]: man-db-cache-update.service: Consumed 1min 9.742s CPU time.
Nov 27 05:27:55 np0005537642 systemd[1]: run-r70f005a089244b718a17ad980ba9b4be.service: Deactivated successfully.
Nov 27 05:28:05 np0005537642 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 27 05:29:08 np0005537642 systemd[1]: Starting dnf makecache...
Nov 27 05:29:09 np0005537642 dnf[29925]: Failed determining last makecache time.
Nov 27 05:29:09 np0005537642 dnf[29925]: CentOS Stream 9 - BaseOS                         61 kB/s | 7.3 kB     00:00
Nov 27 05:29:09 np0005537642 dnf[29925]: CentOS Stream 9 - AppStream                      31 kB/s | 7.4 kB     00:00
Nov 27 05:29:09 np0005537642 dnf[29925]: CentOS Stream 9 - CRB                            68 kB/s | 7.2 kB     00:00
Nov 27 05:29:09 np0005537642 dnf[29925]: CentOS Stream 9 - Extras packages                68 kB/s | 8.3 kB     00:00
Nov 27 05:29:10 np0005537642 dnf[29925]: Metadata cache created.
Nov 27 05:29:10 np0005537642 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 27 05:29:10 np0005537642 systemd[1]: Finished dnf makecache.
Nov 27 05:30:52 np0005537642 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 27 05:30:52 np0005537642 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 27 05:30:52 np0005537642 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 27 05:30:52 np0005537642 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 27 05:31:02 np0005537642 systemd-logind[801]: New session 7 of user zuul.
Nov 27 05:31:02 np0005537642 systemd[1]: Started Session 7 of User zuul.
Nov 27 05:31:03 np0005537642 python3[30013]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:31:05 np0005537642 python3[30129]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:31:05 np0005537642 python3[30202]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764239465.0245855-33938-109664448547600/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:31:06 np0005537642 python3[30228]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:31:06 np0005537642 python3[30301]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764239465.0245855-33938-109664448547600/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:31:06 np0005537642 python3[30327]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:31:07 np0005537642 python3[30400]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764239465.0245855-33938-109664448547600/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:31:07 np0005537642 python3[30426]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:31:08 np0005537642 python3[30499]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764239465.0245855-33938-109664448547600/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:31:08 np0005537642 python3[30525]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:31:08 np0005537642 python3[30598]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764239465.0245855-33938-109664448547600/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:31:09 np0005537642 python3[30624]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:31:09 np0005537642 python3[30697]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764239465.0245855-33938-109664448547600/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:31:09 np0005537642 python3[30723]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:31:10 np0005537642 python3[30796]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764239465.0245855-33938-109664448547600/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:31:21 np0005537642 python3[30854]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:36:21 np0005537642 systemd[1]: session-7.scope: Deactivated successfully.
Nov 27 05:36:21 np0005537642 systemd[1]: session-7.scope: Consumed 6.588s CPU time.
Nov 27 05:36:21 np0005537642 systemd-logind[801]: Session 7 logged out. Waiting for processes to exit.
Nov 27 05:36:21 np0005537642 systemd-logind[801]: Removed session 7.
Nov 27 05:43:01 np0005537642 systemd-logind[801]: New session 8 of user zuul.
Nov 27 05:43:01 np0005537642 systemd[1]: Started Session 8 of User zuul.
Nov 27 05:43:02 np0005537642 python3.9[31023]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:43:05 np0005537642 python3.9[31205]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:43:15 np0005537642 systemd[1]: session-8.scope: Deactivated successfully.
Nov 27 05:43:15 np0005537642 systemd[1]: session-8.scope: Consumed 9.526s CPU time.
Nov 27 05:43:15 np0005537642 systemd-logind[801]: Session 8 logged out. Waiting for processes to exit.
Nov 27 05:43:15 np0005537642 systemd-logind[801]: Removed session 8.
Nov 27 05:43:31 np0005537642 systemd-logind[801]: New session 9 of user zuul.
Nov 27 05:43:31 np0005537642 systemd[1]: Started Session 9 of User zuul.
Nov 27 05:43:32 np0005537642 python3.9[31415]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 27 05:43:33 np0005537642 python3.9[31589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:43:34 np0005537642 python3.9[31741]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:43:35 np0005537642 python3.9[31894]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:43:36 np0005537642 python3.9[32046]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:43:37 np0005537642 python3.9[32198]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:43:38 np0005537642 python3.9[32321]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240216.9229038-177-26213095562840/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:43:39 np0005537642 python3.9[32473]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:43:40 np0005537642 python3.9[32629]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:43:41 np0005537642 python3.9[32781]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:43:41 np0005537642 python3.9[32931]: ansible-ansible.builtin.service_facts Invoked
Nov 27 05:43:50 np0005537642 python3.9[33185]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:43:51 np0005537642 python3.9[33335]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:43:53 np0005537642 python3.9[33489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:43:54 np0005537642 python3.9[33647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 27 05:43:55 np0005537642 python3.9[33731]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:44:55 np0005537642 systemd[1]: Reloading.
Nov 27 05:44:55 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:44:55 np0005537642 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 27 05:44:58 np0005537642 systemd[1]: Reloading.
Nov 27 05:44:59 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:44:59 np0005537642 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 27 05:44:59 np0005537642 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 27 05:44:59 np0005537642 systemd[1]: Reloading.
Nov 27 05:44:59 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:44:59 np0005537642 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 27 05:45:01 np0005537642 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Nov 27 05:45:01 np0005537642 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Nov 27 05:45:01 np0005537642 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Nov 27 05:46:15 np0005537642 kernel: SELinux:  Converting 2718 SID table entries...
Nov 27 05:46:15 np0005537642 kernel: SELinux:  policy capability network_peer_controls=1
Nov 27 05:46:15 np0005537642 kernel: SELinux:  policy capability open_perms=1
Nov 27 05:46:15 np0005537642 kernel: SELinux:  policy capability extended_socket_class=1
Nov 27 05:46:15 np0005537642 kernel: SELinux:  policy capability always_check_network=0
Nov 27 05:46:15 np0005537642 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 27 05:46:15 np0005537642 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 27 05:46:15 np0005537642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 27 05:46:15 np0005537642 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 27 05:46:15 np0005537642 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 27 05:46:15 np0005537642 systemd[1]: Starting man-db-cache-update.service...
Nov 27 05:46:15 np0005537642 systemd[1]: Reloading.
Nov 27 05:46:15 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:46:15 np0005537642 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 27 05:46:17 np0005537642 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 27 05:46:17 np0005537642 systemd[1]: Finished man-db-cache-update.service.
Nov 27 05:46:17 np0005537642 systemd[1]: man-db-cache-update.service: Consumed 1.740s CPU time.
Nov 27 05:46:17 np0005537642 systemd[1]: run-r32d231cdb44346c098de01a1c5ea7876.service: Deactivated successfully.
Nov 27 05:46:17 np0005537642 python3.9[35264]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:46:19 np0005537642 python3.9[35546]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 27 05:46:20 np0005537642 python3.9[35698]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 27 05:46:23 np0005537642 python3.9[35852]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:46:25 np0005537642 python3.9[36004]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 27 05:46:27 np0005537642 python3.9[36156]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:46:34 np0005537642 python3.9[36308]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:46:35 np0005537642 python3.9[36431]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240393.861766-666-134489724241442/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=61726b032acfa271b151f0be3bf1090c70fe7bda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:46:36 np0005537642 python3.9[36583]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:46:37 np0005537642 python3.9[36735]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:46:38 np0005537642 python3.9[36888]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:46:39 np0005537642 python3.9[37040]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 27 05:46:40 np0005537642 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 27 05:46:41 np0005537642 python3.9[37194]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 27 05:46:42 np0005537642 python3.9[37352]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 27 05:46:44 np0005537642 python3.9[37512]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 27 05:46:44 np0005537642 python3.9[37665]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 27 05:46:46 np0005537642 python3.9[37823]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 27 05:46:47 np0005537642 python3.9[37975]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:46:50 np0005537642 python3.9[38129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:46:51 np0005537642 python3.9[38281]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:46:52 np0005537642 python3.9[38404]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764240410.8397849-1023-187706322583086/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:46:53 np0005537642 python3.9[38556]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 27 05:46:53 np0005537642 systemd[1]: Starting Load Kernel Modules...
Nov 27 05:46:53 np0005537642 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 27 05:46:53 np0005537642 kernel: Bridge firewalling registered
Nov 27 05:46:53 np0005537642 systemd-modules-load[38560]: Inserted module 'br_netfilter'
Nov 27 05:46:53 np0005537642 systemd[1]: Finished Load Kernel Modules.
Nov 27 05:46:54 np0005537642 python3.9[38715]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:46:55 np0005537642 python3.9[38838]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764240414.1346664-1092-20348975187433/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:46:56 np0005537642 python3.9[38990]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:46:59 np0005537642 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Nov 27 05:46:59 np0005537642 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Nov 27 05:47:00 np0005537642 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 27 05:47:00 np0005537642 systemd[1]: Starting man-db-cache-update.service...
Nov 27 05:47:00 np0005537642 systemd[1]: Reloading.
Nov 27 05:47:00 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:47:00 np0005537642 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 27 05:47:02 np0005537642 python3.9[40520]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:47:02 np0005537642 python3.9[41296]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 27 05:47:03 np0005537642 python3.9[42022]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:47:05 np0005537642 python3.9[42817]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:47:05 np0005537642 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 27 05:47:06 np0005537642 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 27 05:47:06 np0005537642 systemd[1]: Finished man-db-cache-update.service.
Nov 27 05:47:06 np0005537642 systemd[1]: man-db-cache-update.service: Consumed 6.352s CPU time.
Nov 27 05:47:06 np0005537642 systemd[1]: run-r4889ea40a50b447e9dc2bfb38d7b6522.service: Deactivated successfully.
Nov 27 05:47:06 np0005537642 systemd[1]: Starting Authorization Manager...
Nov 27 05:47:06 np0005537642 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 27 05:47:06 np0005537642 polkitd[43375]: Started polkitd version 0.117
Nov 27 05:47:06 np0005537642 systemd[1]: Started Authorization Manager.
Nov 27 05:47:07 np0005537642 python3.9[43545]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:47:07 np0005537642 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 27 05:47:07 np0005537642 systemd[1]: tuned.service: Deactivated successfully.
Nov 27 05:47:07 np0005537642 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 27 05:47:07 np0005537642 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 27 05:47:07 np0005537642 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 27 05:47:09 np0005537642 python3.9[43706]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 27 05:47:13 np0005537642 python3.9[43858]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:47:13 np0005537642 systemd[1]: Reloading.
Nov 27 05:47:13 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:47:14 np0005537642 python3.9[44047]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:47:14 np0005537642 systemd[1]: Reloading.
Nov 27 05:47:14 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:47:15 np0005537642 python3.9[44236]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:47:16 np0005537642 python3.9[44389]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:47:16 np0005537642 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 27 05:47:17 np0005537642 python3.9[44542]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:47:20 np0005537642 python3.9[44704]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:47:21 np0005537642 python3.9[44857]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 27 05:47:21 np0005537642 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 27 05:47:21 np0005537642 systemd[1]: Stopped Apply Kernel Variables.
Nov 27 05:47:21 np0005537642 systemd[1]: Stopping Apply Kernel Variables...
Nov 27 05:47:21 np0005537642 systemd[1]: Starting Apply Kernel Variables...
Nov 27 05:47:21 np0005537642 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 27 05:47:21 np0005537642 systemd[1]: Finished Apply Kernel Variables.
Nov 27 05:47:21 np0005537642 systemd[1]: session-9.scope: Deactivated successfully.
Nov 27 05:47:21 np0005537642 systemd[1]: session-9.scope: Consumed 2min 22.769s CPU time.
Nov 27 05:47:21 np0005537642 systemd-logind[801]: Session 9 logged out. Waiting for processes to exit.
Nov 27 05:47:21 np0005537642 systemd-logind[801]: Removed session 9.
Nov 27 05:47:27 np0005537642 systemd-logind[801]: New session 10 of user zuul.
Nov 27 05:47:27 np0005537642 systemd[1]: Started Session 10 of User zuul.
Nov 27 05:47:28 np0005537642 python3.9[45040]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:47:30 np0005537642 python3.9[45196]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 27 05:47:31 np0005537642 python3.9[45349]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 27 05:47:32 np0005537642 python3.9[45507]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 27 05:47:34 np0005537642 python3.9[45667]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 27 05:47:34 np0005537642 python3.9[45751]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 27 05:47:38 np0005537642 python3.9[45915]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:47:43 np0005537642 irqbalance[794]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 27 05:47:43 np0005537642 irqbalance[794]: IRQ 26 affinity is now unmanaged
Nov 27 05:47:49 np0005537642 kernel: SELinux:  Converting 2730 SID table entries...
Nov 27 05:47:49 np0005537642 kernel: SELinux:  policy capability network_peer_controls=1
Nov 27 05:47:49 np0005537642 kernel: SELinux:  policy capability open_perms=1
Nov 27 05:47:49 np0005537642 kernel: SELinux:  policy capability extended_socket_class=1
Nov 27 05:47:49 np0005537642 kernel: SELinux:  policy capability always_check_network=0
Nov 27 05:47:49 np0005537642 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 27 05:47:49 np0005537642 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 27 05:47:49 np0005537642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 27 05:47:49 np0005537642 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 27 05:47:49 np0005537642 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 27 05:47:51 np0005537642 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 27 05:47:51 np0005537642 systemd[1]: Starting man-db-cache-update.service...
Nov 27 05:47:51 np0005537642 systemd[1]: Reloading.
Nov 27 05:47:51 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:47:51 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:47:51 np0005537642 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 27 05:47:56 np0005537642 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 27 05:47:56 np0005537642 systemd[1]: Finished man-db-cache-update.service.
Nov 27 05:47:56 np0005537642 systemd[1]: man-db-cache-update.service: Consumed 1.128s CPU time.
Nov 27 05:47:56 np0005537642 systemd[1]: run-rc589d0ce89ad44c79565d542e7c9006d.service: Deactivated successfully.
Nov 27 05:47:57 np0005537642 python3.9[47012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 27 05:47:57 np0005537642 systemd[1]: Reloading.
Nov 27 05:47:57 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:47:57 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:47:57 np0005537642 systemd[1]: Starting Open vSwitch Database Unit...
Nov 27 05:47:57 np0005537642 chown[47055]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 27 05:47:57 np0005537642 ovs-ctl[47060]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 27 05:47:57 np0005537642 ovs-ctl[47060]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 27 05:47:57 np0005537642 ovs-ctl[47060]: Starting ovsdb-server [  OK  ]
Nov 27 05:47:57 np0005537642 ovs-vsctl[47109]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 27 05:47:57 np0005537642 ovs-vsctl[47129]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d77ec416-1825-4703-b87a-1ba7b8db0553\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 27 05:47:57 np0005537642 ovs-ctl[47060]: Configuring Open vSwitch system IDs [  OK  ]
Nov 27 05:47:57 np0005537642 ovs-ctl[47060]: Enabling remote OVSDB managers [  OK  ]
Nov 27 05:47:57 np0005537642 ovs-vsctl[47135]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 27 05:47:57 np0005537642 systemd[1]: Started Open vSwitch Database Unit.
Nov 27 05:47:57 np0005537642 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 27 05:47:58 np0005537642 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 27 05:47:58 np0005537642 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 27 05:47:58 np0005537642 kernel: openvswitch: Open vSwitch switching datapath
Nov 27 05:47:58 np0005537642 ovs-ctl[47179]: Inserting openvswitch module [  OK  ]
Nov 27 05:47:58 np0005537642 ovs-ctl[47148]: Starting ovs-vswitchd [  OK  ]
Nov 27 05:47:58 np0005537642 ovs-vsctl[47196]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 27 05:47:58 np0005537642 ovs-ctl[47148]: Enabling remote OVSDB managers [  OK  ]
Nov 27 05:47:58 np0005537642 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 27 05:47:58 np0005537642 systemd[1]: Starting Open vSwitch...
Nov 27 05:47:58 np0005537642 systemd[1]: Finished Open vSwitch.
Nov 27 05:47:59 np0005537642 python3.9[47348]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:48:00 np0005537642 python3.9[47500]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 27 05:48:02 np0005537642 kernel: SELinux:  Converting 2744 SID table entries...
Nov 27 05:48:02 np0005537642 kernel: SELinux:  policy capability network_peer_controls=1
Nov 27 05:48:02 np0005537642 kernel: SELinux:  policy capability open_perms=1
Nov 27 05:48:02 np0005537642 kernel: SELinux:  policy capability extended_socket_class=1
Nov 27 05:48:02 np0005537642 kernel: SELinux:  policy capability always_check_network=0
Nov 27 05:48:02 np0005537642 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 27 05:48:02 np0005537642 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 27 05:48:02 np0005537642 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 27 05:48:03 np0005537642 python3.9[47655]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:48:04 np0005537642 dbus-broker-launch[774]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 27 05:48:04 np0005537642 python3.9[47813]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:48:07 np0005537642 python3.9[47966]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:48:09 np0005537642 python3.9[48253]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 27 05:48:10 np0005537642 python3.9[48403]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:48:11 np0005537642 python3.9[48557]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:48:12 np0005537642 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 27 05:48:12 np0005537642 systemd[1]: Starting man-db-cache-update.service...
Nov 27 05:48:12 np0005537642 systemd[1]: Reloading.
Nov 27 05:48:13 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:48:13 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:48:13 np0005537642 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 27 05:48:13 np0005537642 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 27 05:48:13 np0005537642 systemd[1]: Finished man-db-cache-update.service.
Nov 27 05:48:13 np0005537642 systemd[1]: run-rca9e59ea91544bd999af4980b3f144ad.service: Deactivated successfully.
Nov 27 05:48:14 np0005537642 python3.9[48874]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 27 05:48:14 np0005537642 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 27 05:48:14 np0005537642 systemd[1]: Stopped Network Manager Wait Online.
Nov 27 05:48:14 np0005537642 systemd[1]: Stopping Network Manager Wait Online...
Nov 27 05:48:14 np0005537642 systemd[1]: Stopping Network Manager...
Nov 27 05:48:14 np0005537642 NetworkManager[7186]: <info>  [1764240494.3172] caught SIGTERM, shutting down normally.
Nov 27 05:48:14 np0005537642 NetworkManager[7186]: <info>  [1764240494.3194] dhcp4 (eth0): canceled DHCP transaction
Nov 27 05:48:14 np0005537642 NetworkManager[7186]: <info>  [1764240494.3194] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:48:14 np0005537642 NetworkManager[7186]: <info>  [1764240494.3194] dhcp4 (eth0): state changed no lease
Nov 27 05:48:14 np0005537642 NetworkManager[7186]: <info>  [1764240494.3197] manager: NetworkManager state is now CONNECTED_SITE
Nov 27 05:48:14 np0005537642 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 27 05:48:14 np0005537642 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 27 05:48:14 np0005537642 NetworkManager[7186]: <info>  [1764240494.3667] exiting (success)
Nov 27 05:48:14 np0005537642 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 27 05:48:14 np0005537642 systemd[1]: Stopped Network Manager.
Nov 27 05:48:14 np0005537642 systemd[1]: NetworkManager.service: Consumed 12.688s CPU time, 4.3M memory peak, read 0B from disk, written 30.5K to disk.
Nov 27 05:48:14 np0005537642 systemd[1]: Starting Network Manager...
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.4404] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:15470e99-5312-4d44-ad2f-5b0f2ebe5cc1)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.4406] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.4481] manager[0x5575e1331090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 27 05:48:14 np0005537642 systemd[1]: Starting Hostname Service...
Nov 27 05:48:14 np0005537642 systemd[1]: Started Hostname Service.
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5691] hostname: hostname: using hostnamed
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5693] hostname: static hostname changed from (none) to "compute-0"
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5700] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5707] manager[0x5575e1331090]: rfkill: Wi-Fi hardware radio set enabled
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5707] manager[0x5575e1331090]: rfkill: WWAN hardware radio set enabled
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5746] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5761] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5762] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5763] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5763] manager: Networking is enabled by state file
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5767] settings: Loaded settings plugin: keyfile (internal)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5773] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5815] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5830] dhcp: init: Using DHCP client 'internal'
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5834] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5843] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5851] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5863] device (lo): Activation: starting connection 'lo' (f909ba12-8db7-4bba-9d0e-d40b1f19aeea)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5872] device (eth0): carrier: link connected
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5879] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5885] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5886] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5895] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5904] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5913] device (eth1): carrier: link connected
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5919] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5926] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (34d29167-9f55-51d9-826e-f0fb3219e8aa) (indicated)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5927] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5934] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5944] device (eth1): Activation: starting connection 'ci-private-network' (34d29167-9f55-51d9-826e-f0fb3219e8aa)
Nov 27 05:48:14 np0005537642 systemd[1]: Started Network Manager.
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5953] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5970] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5974] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5978] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5984] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5988] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.5994] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6017] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6026] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6035] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6039] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6053] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6074] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6090] dhcp4 (eth0): state changed new lease, address=38.102.83.130
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6101] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 27 05:48:14 np0005537642 systemd[1]: Starting Network Manager Wait Online...
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6371] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6399] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6408] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6416] device (lo): Activation: successful, device activated.
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6428] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6483] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6490] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6495] device (eth1): Activation: successful, device activated.
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6507] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6511] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6518] manager: NetworkManager state is now CONNECTED_SITE
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6524] device (eth0): Activation: successful, device activated.
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6536] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 27 05:48:14 np0005537642 NetworkManager[48892]: <info>  [1764240494.6543] manager: startup complete
Nov 27 05:48:14 np0005537642 systemd[1]: Finished Network Manager Wait Online.
Nov 27 05:48:15 np0005537642 python3.9[49100]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:48:24 np0005537642 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 27 05:48:31 np0005537642 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 27 05:48:31 np0005537642 systemd[1]: Starting man-db-cache-update.service...
Nov 27 05:48:31 np0005537642 systemd[1]: Reloading.
Nov 27 05:48:31 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:48:31 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:48:31 np0005537642 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 27 05:48:36 np0005537642 python3.9[49559]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:48:36 np0005537642 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 27 05:48:36 np0005537642 systemd[1]: Finished man-db-cache-update.service.
Nov 27 05:48:36 np0005537642 systemd[1]: run-re0cbd2c1081947358aca866e148a2df5.service: Deactivated successfully.
Nov 27 05:48:37 np0005537642 python3.9[49712]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:38 np0005537642 python3.9[49866]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:38 np0005537642 python3.9[50018]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:39 np0005537642 python3.9[50170]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:40 np0005537642 python3.9[50322]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:41 np0005537642 python3.9[50474]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:48:41 np0005537642 python3.9[50597]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240520.6463706-647-165556783635648/.source _original_basename=.etz35m33 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:42 np0005537642 python3.9[50749]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:43 np0005537642 python3.9[50901]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 27 05:48:44 np0005537642 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 27 05:48:44 np0005537642 python3.9[51055]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:47 np0005537642 python3.9[51482]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 27 05:48:48 np0005537642 ansible-async_wrapper.py[51657]: Invoked with j757002643200 300 /home/zuul/.ansible/tmp/ansible-tmp-1764240527.9137926-845-167067826083905/AnsiballZ_edpm_os_net_config.py _
Nov 27 05:48:48 np0005537642 ansible-async_wrapper.py[51660]: Starting module and watcher
Nov 27 05:48:48 np0005537642 ansible-async_wrapper.py[51660]: Start watching 51661 (300)
Nov 27 05:48:48 np0005537642 ansible-async_wrapper.py[51661]: Start module (51661)
Nov 27 05:48:48 np0005537642 ansible-async_wrapper.py[51657]: Return async_wrapper task started.
Nov 27 05:48:49 np0005537642 python3.9[51662]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Nov 27 05:48:49 np0005537642 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 27 05:48:49 np0005537642 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 27 05:48:49 np0005537642 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 27 05:48:49 np0005537642 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 27 05:48:49 np0005537642 kernel: cfg80211: failed to load regulatory.db
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.4897] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.4921] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5672] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5674] audit: op="connection-add" uuid="d5899795-14bc-4f52-b509-8e4e9422d7ed" name="br-ex-br" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5698] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5700] audit: op="connection-add" uuid="ec74cebd-860c-47a7-bf8e-cdb9f5842c7f" name="br-ex-port" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5722] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5724] audit: op="connection-add" uuid="d4c13712-984b-4f5b-9925-d509e9d077b1" name="eth1-port" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5744] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5747] audit: op="connection-add" uuid="62ac4b2e-4ffa-423f-b5e6-37ff3f863089" name="vlan20-port" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5769] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5771] audit: op="connection-add" uuid="9be7116c-bb1b-4f60-a750-a84903690939" name="vlan21-port" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5791] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5794] audit: op="connection-add" uuid="014a4e1a-a908-4849-805f-2c9a5868a561" name="vlan22-port" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5814] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5817] audit: op="connection-add" uuid="8fbe3419-0c03-456b-9a2d-06e76632fe90" name="vlan23-port" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5850] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5881] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.5883] audit: op="connection-add" uuid="9480db6f-59a2-471a-9f0a-78b48b61a74e" name="br-ex-if" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6637] audit: op="connection-update" uuid="34d29167-9f55-51d9-826e-f0fb3219e8aa" name="ci-private-network" args="connection.master,connection.timestamp,connection.controller,connection.slave-type,connection.port-type,ipv4.routes,ipv4.addresses,ipv4.dns,ipv4.method,ipv4.never-default,ipv4.routing-rules,ipv6.routes,ipv6.addresses,ipv6.dns,ipv6.method,ipv6.routing-rules,ipv6.addr-gen-mode,ovs-external-ids.data,ovs-interface.type" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6675] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6679] audit: op="connection-add" uuid="fee118a9-39a4-4251-bbb4-b576dd1a0cc7" name="vlan20-if" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6711] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6714] audit: op="connection-add" uuid="2b47858f-3045-4bf3-9728-da27413fdca5" name="vlan21-if" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6745] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6748] audit: op="connection-add" uuid="1484fecc-1bff-4719-8381-7ab58a4a6a1a" name="vlan22-if" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6777] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6780] audit: op="connection-add" uuid="4bbc193e-14d2-459d-b7be-a6dc0cfbfea8" name="vlan23-if" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6803] audit: op="connection-delete" uuid="36db9ee6-df8b-3fd7-85c2-f23e4a4c2171" name="Wired connection 1" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6827] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6843] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6850] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (d5899795-14bc-4f52-b509-8e4e9422d7ed)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6851] audit: op="connection-activate" uuid="d5899795-14bc-4f52-b509-8e4e9422d7ed" name="br-ex-br" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6854] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6867] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6873] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (ec74cebd-860c-47a7-bf8e-cdb9f5842c7f)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6876] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6886] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6892] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d4c13712-984b-4f5b-9925-d509e9d077b1)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6896] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6907] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6913] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (62ac4b2e-4ffa-423f-b5e6-37ff3f863089)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6916] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6928] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6935] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (9be7116c-bb1b-4f60-a750-a84903690939)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6938] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6949] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6956] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (014a4e1a-a908-4849-805f-2c9a5868a561)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6959] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6970] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6977] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (8fbe3419-0c03-456b-9a2d-06e76632fe90)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6978] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6982] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6984] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.6995] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7002] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7009] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (9480db6f-59a2-471a-9f0a-78b48b61a74e)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7011] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7016] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7019] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7021] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7023] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7043] device (eth1): disconnecting for new activation request.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7045] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7050] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7053] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7056] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7060] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7068] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7075] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (fee118a9-39a4-4251-bbb4-b576dd1a0cc7)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7077] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7082] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7085] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7087] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7092] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7100] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7107] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (2b47858f-3045-4bf3-9728-da27413fdca5)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7108] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7113] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7116] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7118] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7123] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7131] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7137] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1484fecc-1bff-4719-8381-7ab58a4a6a1a)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7139] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7143] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7146] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7148] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7152] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7160] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7168] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (4bbc193e-14d2-459d-b7be-a6dc0cfbfea8)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7169] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7174] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7177] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7179] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7182] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7204] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.method,ipv6.addr-gen-mode" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7208] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7213] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7217] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7226] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7230] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7233] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7237] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7238] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7243] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7247] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7249] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7251] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7256] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7260] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7262] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7264] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7269] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7273] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7276] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7277] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7282] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7285] dhcp4 (eth0): canceled DHCP transaction
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7286] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7286] dhcp4 (eth0): state changed no lease
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7287] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7295] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51663 uid=0 result="fail" reason="Device is not activated"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7336] dhcp4 (eth0): state changed new lease, address=38.102.83.130
Nov 27 05:48:51 np0005537642 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7500] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 27 05:48:51 np0005537642 kernel: ovs-system: entered promiscuous mode
Nov 27 05:48:51 np0005537642 kernel: Timeout policy base is empty
Nov 27 05:48:51 np0005537642 systemd-udevd[51669]: Network interface NamePolicy= disabled on kernel command line.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7677] device (eth1): Activation: starting connection 'ci-private-network' (34d29167-9f55-51d9-826e-f0fb3219e8aa)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7686] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7702] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7714] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7728] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7737] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7742] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7747] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7754] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7757] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7759] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7761] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7763] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7766] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7770] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7772] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7775] device (eth1): released from controller device eth1
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7783] device (eth1): disconnecting for new activation request.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7784] audit: op="connection-activate" uuid="34d29167-9f55-51d9-826e-f0fb3219e8aa" name="ci-private-network" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7789] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7794] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7809] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7813] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7820] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7824] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7829] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7834] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7839] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7844] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7849] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7855] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7860] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7865] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7879] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7888] device (eth1): Activation: starting connection 'ci-private-network' (34d29167-9f55-51d9-826e-f0fb3219e8aa)
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7892] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51663 uid=0 result="success"
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7896] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7901] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7906] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7920] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7924] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7984] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7987] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.7993] device (eth1): Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 kernel: br-ex: entered promiscuous mode
Nov 27 05:48:51 np0005537642 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 27 05:48:51 np0005537642 kernel: vlan22: entered promiscuous mode
Nov 27 05:48:51 np0005537642 systemd-udevd[51667]: Network interface NamePolicy= disabled on kernel command line.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8477] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8489] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8516] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8518] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8528] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 kernel: vlan21: entered promiscuous mode
Nov 27 05:48:51 np0005537642 kernel: vlan23: entered promiscuous mode
Nov 27 05:48:51 np0005537642 systemd-udevd[51668]: Network interface NamePolicy= disabled on kernel command line.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8665] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8677] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 kernel: vlan20: entered promiscuous mode
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8727] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8731] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8742] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8797] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8802] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8869] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8880] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8906] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8909] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8911] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8921] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8930] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8939] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8959] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.8979] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.9022] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.9025] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 27 05:48:51 np0005537642 NetworkManager[48892]: <info>  [1764240531.9036] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 27 05:48:52 np0005537642 python3.9[52026]: ansible-ansible.legacy.async_status Invoked with jid=j757002643200.51657 mode=status _async_dir=/root/.ansible_async
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.0037] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51663 uid=0 result="success"
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.2779] checkpoint[0x5575e1307950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.2783] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51663 uid=0 result="success"
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.6868] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51663 uid=0 result="success"
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.6877] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51663 uid=0 result="success"
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.8594] audit: op="networking-control" arg="global-dns-configuration" pid=51663 uid=0 result="success"
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.8622] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.8649] audit: op="networking-control" arg="global-dns-configuration" pid=51663 uid=0 result="success"
Nov 27 05:48:53 np0005537642 ansible-async_wrapper.py[51660]: 51661 still running (300)
Nov 27 05:48:53 np0005537642 NetworkManager[48892]: <info>  [1764240533.9249] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51663 uid=0 result="success"
Nov 27 05:48:54 np0005537642 NetworkManager[48892]: <info>  [1764240534.0464] checkpoint[0x5575e1307a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 27 05:48:54 np0005537642 NetworkManager[48892]: <info>  [1764240534.0467] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51663 uid=0 result="success"
Nov 27 05:48:54 np0005537642 ansible-async_wrapper.py[51661]: Module complete (51661)
Nov 27 05:48:56 np0005537642 python3.9[52133]: ansible-ansible.legacy.async_status Invoked with jid=j757002643200.51657 mode=status _async_dir=/root/.ansible_async
Nov 27 05:48:57 np0005537642 python3.9[52232]: ansible-ansible.legacy.async_status Invoked with jid=j757002643200.51657 mode=cleanup _async_dir=/root/.ansible_async
Nov 27 05:48:57 np0005537642 python3.9[52384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:48:58 np0005537642 python3.9[52507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240537.3035765-926-60536288564316/.source.returncode _original_basename=._o3trmtt follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:48:58 np0005537642 ansible-async_wrapper.py[51660]: Done in kid B.
Nov 27 05:48:59 np0005537642 python3.9[52659]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:49:00 np0005537642 python3.9[52783]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240538.801794-974-87181906522217/.source.cfg _original_basename=.mwb8u1_z follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:49:01 np0005537642 python3.9[52935]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 27 05:49:01 np0005537642 systemd[1]: Reloading Network Manager...
Nov 27 05:49:01 np0005537642 NetworkManager[48892]: <info>  [1764240541.2263] audit: op="reload" arg="0" pid=52939 uid=0 result="success"
Nov 27 05:49:01 np0005537642 NetworkManager[48892]: <info>  [1764240541.2275] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 27 05:49:01 np0005537642 systemd[1]: Reloaded Network Manager.
Nov 27 05:49:02 np0005537642 systemd[1]: session-10.scope: Deactivated successfully.
Nov 27 05:49:02 np0005537642 systemd[1]: session-10.scope: Consumed 56.327s CPU time.
Nov 27 05:49:02 np0005537642 systemd-logind[801]: Session 10 logged out. Waiting for processes to exit.
Nov 27 05:49:02 np0005537642 systemd-logind[801]: Removed session 10.
Nov 27 05:49:08 np0005537642 systemd-logind[801]: New session 11 of user zuul.
Nov 27 05:49:08 np0005537642 systemd[1]: Started Session 11 of User zuul.
Nov 27 05:49:09 np0005537642 python3.9[53123]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:49:10 np0005537642 python3.9[53278]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 27 05:49:11 np0005537642 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 27 05:49:11 np0005537642 python3.9[53472]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:49:12 np0005537642 systemd[1]: session-11.scope: Deactivated successfully.
Nov 27 05:49:12 np0005537642 systemd[1]: session-11.scope: Consumed 2.925s CPU time.
Nov 27 05:49:12 np0005537642 systemd-logind[801]: Session 11 logged out. Waiting for processes to exit.
Nov 27 05:49:12 np0005537642 systemd-logind[801]: Removed session 11.
Nov 27 05:49:17 np0005537642 systemd-logind[801]: New session 12 of user zuul.
Nov 27 05:49:17 np0005537642 systemd[1]: Started Session 12 of User zuul.
Nov 27 05:49:19 np0005537642 python3.9[53654]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:49:20 np0005537642 python3.9[53808]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:49:21 np0005537642 python3.9[53964]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 27 05:49:22 np0005537642 python3.9[54049]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:49:24 np0005537642 python3.9[54202]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 27 05:49:26 np0005537642 python3.9[54397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:49:27 np0005537642 python3.9[54549]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:49:27 np0005537642 systemd[1]: var-lib-containers-storage-overlay-compat4251248292-merged.mount: Deactivated successfully.
Nov 27 05:49:27 np0005537642 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck2438551986-merged.mount: Deactivated successfully.
Nov 27 05:49:27 np0005537642 podman[54550]: 2025-11-27 10:49:27.458893751 +0000 UTC m=+0.155337306 system refresh
Nov 27 05:49:28 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:49:28 np0005537642 python3.9[54712]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:49:29 np0005537642 python3.9[54835]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240567.7044969-197-97091201676208/.source.json follow=False _original_basename=podman_network_config.j2 checksum=b2054d6eb83b0eef294894589d306fffe3af7ad3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:49:30 np0005537642 python3.9[54987]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:49:30 np0005537642 python3.9[55110]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764240569.6943188-242-155406364867379/.source.conf follow=False _original_basename=registries.conf.j2 checksum=3101fce9eedf266304fd5f1fa45e8101d0788674 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:49:32 np0005537642 python3.9[55262]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:49:32 np0005537642 python3.9[55414]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:49:33 np0005537642 python3.9[55566]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:49:34 np0005537642 python3.9[55718]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:49:35 np0005537642 python3.9[55870]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:49:38 np0005537642 python3.9[56023]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:49:39 np0005537642 python3.9[56177]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:49:39 np0005537642 python3.9[56329]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:49:40 np0005537642 python3.9[56481]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:49:42 np0005537642 python3.9[56634]: ansible-service_facts Invoked
Nov 27 05:49:42 np0005537642 network[56651]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 27 05:49:42 np0005537642 network[56652]: 'network-scripts' will be removed from distribution in near future.
Nov 27 05:49:42 np0005537642 network[56653]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 27 05:49:49 np0005537642 python3.9[57106]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 27 05:49:52 np0005537642 python3.9[57259]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 27 05:49:53 np0005537642 python3.9[57411]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:49:54 np0005537642 python3.9[57536]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240593.0431678-674-110124142289318/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:49:55 np0005537642 python3.9[57690]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:49:55 np0005537642 python3.9[57815]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240594.537072-719-169843781533060/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:49:57 np0005537642 python3.9[57969]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:49:59 np0005537642 python3.9[58123]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 27 05:50:00 np0005537642 python3.9[58207]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:50:02 np0005537642 python3.9[58361]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 27 05:50:03 np0005537642 python3.9[58445]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 27 05:50:03 np0005537642 chronyd[792]: chronyd exiting
Nov 27 05:50:03 np0005537642 systemd[1]: Stopping NTP client/server...
Nov 27 05:50:03 np0005537642 systemd[1]: chronyd.service: Deactivated successfully.
Nov 27 05:50:03 np0005537642 systemd[1]: Stopped NTP client/server.
Nov 27 05:50:03 np0005537642 systemd[1]: Starting NTP client/server...
Nov 27 05:50:03 np0005537642 chronyd[58453]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 27 05:50:03 np0005537642 chronyd[58453]: Frequency -28.427 +/- 0.113 ppm read from /var/lib/chrony/drift
Nov 27 05:50:03 np0005537642 chronyd[58453]: Loaded seccomp filter (level 2)
Nov 27 05:50:03 np0005537642 systemd[1]: Started NTP client/server.
Nov 27 05:50:04 np0005537642 systemd[1]: session-12.scope: Deactivated successfully.
Nov 27 05:50:04 np0005537642 systemd[1]: session-12.scope: Consumed 29.978s CPU time.
Nov 27 05:50:04 np0005537642 systemd-logind[801]: Session 12 logged out. Waiting for processes to exit.
Nov 27 05:50:04 np0005537642 systemd-logind[801]: Removed session 12.
Nov 27 05:50:10 np0005537642 systemd-logind[801]: New session 13 of user zuul.
Nov 27 05:50:10 np0005537642 systemd[1]: Started Session 13 of User zuul.
Nov 27 05:50:11 np0005537642 python3.9[58634]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:12 np0005537642 python3.9[58786]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:13 np0005537642 python3.9[58909]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240611.5728984-62-85040902952219/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:13 np0005537642 systemd[1]: session-13.scope: Deactivated successfully.
Nov 27 05:50:13 np0005537642 systemd[1]: session-13.scope: Consumed 2.019s CPU time.
Nov 27 05:50:13 np0005537642 systemd-logind[801]: Session 13 logged out. Waiting for processes to exit.
Nov 27 05:50:13 np0005537642 systemd-logind[801]: Removed session 13.
Nov 27 05:50:20 np0005537642 systemd-logind[801]: New session 14 of user zuul.
Nov 27 05:50:20 np0005537642 systemd[1]: Started Session 14 of User zuul.
Nov 27 05:50:21 np0005537642 python3.9[59087]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:50:23 np0005537642 python3.9[59243]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:24 np0005537642 python3.9[59418]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:25 np0005537642 python3.9[59541]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764240623.3009017-83-252253297095916/.source.json _original_basename=.splf4_ge follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:26 np0005537642 python3.9[59693]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:27 np0005537642 python3.9[59816]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240625.88089-152-271351307941184/.source _original_basename=.76ovq9a9 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:28 np0005537642 python3.9[59968]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:50:29 np0005537642 python3.9[60120]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:30 np0005537642 python3.9[60243]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764240628.722079-224-116707956155205/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:50:31 np0005537642 python3.9[60395]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:31 np0005537642 python3.9[60518]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764240630.40271-224-152644563371350/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 27 05:50:32 np0005537642 python3.9[60670]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:33 np0005537642 python3.9[60822]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:34 np0005537642 python3.9[60945]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240632.893864-335-265921793739207/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:35 np0005537642 python3.9[61097]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:36 np0005537642 python3.9[61220]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240634.6606789-380-228387709548209/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:37 np0005537642 python3.9[61372]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:50:37 np0005537642 systemd[1]: Reloading.
Nov 27 05:50:38 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:50:38 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:50:38 np0005537642 systemd[1]: Reloading.
Nov 27 05:50:38 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:50:38 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:50:38 np0005537642 systemd[1]: Starting EDPM Container Shutdown...
Nov 27 05:50:38 np0005537642 systemd[1]: Finished EDPM Container Shutdown.
Nov 27 05:50:39 np0005537642 python3.9[61601]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:40 np0005537642 python3.9[61724]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240638.8293817-449-169752387568548/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:41 np0005537642 python3.9[61876]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:41 np0005537642 python3.9[61999]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240640.618757-494-204825482723344/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:42 np0005537642 python3.9[62151]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:50:42 np0005537642 systemd[1]: Reloading.
Nov 27 05:50:43 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:50:43 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:50:43 np0005537642 systemd[1]: Reloading.
Nov 27 05:50:43 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:50:43 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:50:43 np0005537642 systemd[1]: Starting Create netns directory...
Nov 27 05:50:43 np0005537642 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 27 05:50:43 np0005537642 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 27 05:50:43 np0005537642 systemd[1]: Finished Create netns directory.
Nov 27 05:50:44 np0005537642 python3.9[62377]: ansible-ansible.builtin.service_facts Invoked
Nov 27 05:50:45 np0005537642 network[62394]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 27 05:50:45 np0005537642 network[62395]: 'network-scripts' will be removed from distribution in near future.
Nov 27 05:50:45 np0005537642 network[62396]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 27 05:50:50 np0005537642 python3.9[62659]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:50:50 np0005537642 systemd[1]: Reloading.
Nov 27 05:50:50 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:50:50 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:50:50 np0005537642 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 27 05:50:51 np0005537642 iptables.init[62700]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 27 05:50:51 np0005537642 iptables.init[62700]: iptables: Flushing firewall rules: [  OK  ]
Nov 27 05:50:51 np0005537642 systemd[1]: iptables.service: Deactivated successfully.
Nov 27 05:50:51 np0005537642 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 27 05:50:52 np0005537642 python3.9[62896]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:50:53 np0005537642 python3.9[63050]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:50:53 np0005537642 systemd[1]: Reloading.
Nov 27 05:50:53 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:50:53 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:50:53 np0005537642 systemd[1]: Starting Netfilter Tables...
Nov 27 05:50:53 np0005537642 systemd[1]: Finished Netfilter Tables.
Nov 27 05:50:54 np0005537642 python3.9[63244]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:50:56 np0005537642 python3.9[63397]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:50:56 np0005537642 python3.9[63522]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240655.4839609-701-226233650078295/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:57 np0005537642 python3.9[63675]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 27 05:50:57 np0005537642 systemd[1]: Reloading OpenSSH server daemon...
Nov 27 05:50:57 np0005537642 systemd[1]: Reloaded OpenSSH server daemon.
Nov 27 05:50:59 np0005537642 python3.9[63831]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:50:59 np0005537642 python3.9[63983]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:01 np0005537642 python3.9[64106]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240659.3164585-794-208920874527027/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:02 np0005537642 python3.9[64258]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 27 05:51:02 np0005537642 systemd[1]: Starting Time & Date Service...
Nov 27 05:51:02 np0005537642 systemd[1]: Started Time & Date Service.
Nov 27 05:51:03 np0005537642 python3.9[64414]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:04 np0005537642 python3.9[64566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:05 np0005537642 python3.9[64689]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240664.0200932-899-23510149933754/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:06 np0005537642 python3.9[64841]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:06 np0005537642 python3.9[64964]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764240665.5484116-944-155639976139711/.source.yaml _original_basename=.murfrrq5 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:08 np0005537642 python3.9[65116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:08 np0005537642 python3.9[65239]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240667.134692-989-175993236354659/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:09 np0005537642 python3.9[65391]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:51:10 np0005537642 python3.9[65544]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:51:11 np0005537642 python3[65697]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 27 05:51:12 np0005537642 python3.9[65849]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:13 np0005537642 python3.9[65972]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240671.8996072-1106-223481352881160/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:14 np0005537642 python3.9[66124]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:14 np0005537642 python3.9[66247]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240673.4449198-1151-219129567653662/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:15 np0005537642 python3.9[66399]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:16 np0005537642 python3.9[66522]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240675.0018315-1196-139961169118192/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:17 np0005537642 python3.9[66674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:18 np0005537642 python3.9[66797]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240677.0215607-1241-158576363249174/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:19 np0005537642 python3.9[66949]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 27 05:51:20 np0005537642 python3.9[67072]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764240678.7189784-1286-60176093997860/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:21 np0005537642 python3.9[67224]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:22 np0005537642 python3.9[67376]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:51:23 np0005537642 python3.9[67535]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
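The blockinfile call above wires the generated rule files into boot-time loading. Reconstructed from its logged `block=` argument (the journal escapes newlines as `#012`) and its `marker` parameters, the managed block written to /etc/sysconfig/nftables.conf is:

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

Note the include order matches the validation pipeline two lines earlier, minus edpm-flushes.nft and edpm-update-jumps.nft, which are only applied at reconfiguration time, not at boot.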
Nov 27 05:51:24 np0005537642 python3.9[67688]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:25 np0005537642 python3.9[67840]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:26 np0005537642 python3.9[67992]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 27 05:51:27 np0005537642 python3.9[68145]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
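With boot=True, the two ansible.posix.mount calls above persist the hugepage mounts in /etc/fstab. Given the logged parameters (src=none, fstype=hugetlbfs, opts=pagesize=…, dump=0, passno=0), the resulting entries are approximately (a sketch; field spacing is up to the module):

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```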
Nov 27 05:51:27 np0005537642 systemd-logind[801]: Session 14 logged out. Waiting for processes to exit.
Nov 27 05:51:27 np0005537642 systemd[1]: session-14.scope: Deactivated successfully.
Nov 27 05:51:27 np0005537642 systemd[1]: session-14.scope: Consumed 45.395s CPU time.
Nov 27 05:51:27 np0005537642 systemd-logind[801]: Removed session 14.
Nov 27 05:51:32 np0005537642 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 27 05:51:34 np0005537642 systemd-logind[801]: New session 15 of user zuul.
Nov 27 05:51:34 np0005537642 systemd[1]: Started Session 15 of User zuul.
Nov 27 05:51:35 np0005537642 python3.9[68329]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 27 05:51:36 np0005537642 python3.9[68481]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:51:37 np0005537642 python3.9[68633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:51:39 np0005537642 python3.9[68785]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCdZu5XPKNGNh7Of8Swj4mSJrgASKYMcXQKChj3hlJh/KcJKITVhy4gIAGkoS3pcT0zytq3G96M8s59YBLZPe4dUzzNkIruKWGWTXqUl3v6UVu1IDX7041bcHK2r16weKHDUl4ReylQoBizwEyOVPF+6BLFv/+5EGKZSsVkKXPfo/4lv98G8EwzIe8wPYd5Jw6F5CYAB78S/yB+Fv0bRLtSHsxBWS7aF9mDzeXUtCieh7bluHIS5mtnofdG6rGATUykFM4qIKetcqW7ZMtrNs7KSf9v7W4ET10+WMnmXXR5bPQTd9P7NdImgpXaL+YO84DXwnkglgtB1GLRABzBth6DXeiAc2obhfooQbMHYYZgE9MtMyBoCf59kV/qbwdP/wx9cWyrTxfQkD6dndqD3+CmQkd/LuDiIyzXtkiimkHBzZhEeJrCkECMLSoTa8WyPlL+RZOr5kfoaIJmGAdcLLhue3g7oML7KYRvOc6MQRTcy+IMzB0usJyHJBp0wzq6Dsc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOw09IDykHo+xfBgjXZNy3AcaD17Po4MgxV5RqNX0APD#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIOtZwOF01LiJH78J19bww6O4Xad3BZMt3Ng3z89xby2tmBDJ1rK1QcBRdCtBDV8XTM2gJgS241BqSuiRVCaTBM=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz95TRc2jFmfT5PKQJq5G4w4aBShBJcUhYhdvzggab+c7Zu4UI8fDxACiDqxh2k0WXI15/zX1nnuFGTUAEhpVwXYtXoplD5ru16b7FJP2s8AnW+sPflG5zqvgnV6CUgcw0N1RK6BW5+IUZ7njRb6k++oDMDV5tW2qsp/so/csbT/vAQ6tXoME7GoRJLh64/Ki7jyCa8HLEdsn/ee3gAuoUPKfrvjLfLcQxHfgv2g7nILs7S/Ep0elgUdKv8CBy5C7UrWxFUQYI+8PJHUsOjWL9S29EFvtqQkVPQfyA9XFlqdu+Sk9QpU6b1tzG0ZU+hxNj5avRNBOi84K4krQJPZ7fCcJcZmWJ7ZIUZPGXU2/yO893xwclmJ9q8ow/Ll0fkHRJnpwNv142NSrQwv+2QQnZ1ocDQRrctkI4DlY5/qmx4qWO1RawccVudUQ/wVFGDsJk0D9eC61VlVxWmLDnZ9ubDuDeMRW6BWVMI4rqLOHMCixSMRoNRBsgX+Onsiu0sds=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFrYXV5ZnyoSO89sKZo/z0z348jBaTvAP6akAkce5RmR#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFF8T/dSaLv0SV05CLFF4TBNoB0Vrfb8Fr1nXSxU+1cU3kwexZhqgvIvp+7t8PzmisqH5tAaqofpLS4OIzFVqHQ=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0G0wza5RR7ESuw+8vQyRClLSTGDiHfRizr56AqOVYarQyGtaz/MAB5uXgKkXgbsLuPvnx25Nz8sJ4bLqOignhFywV01mYXkWZ8avyR+pcItppU9CBk523PfZuo7IgzLnO8HId74p3nGyyYF2Pvwn2iFouDrpuNvkZrTRYnMVbtci2qvJibpStPQs+uZEW2+iko42k1DnbUrKvSjdanvGCGV9aswFVNWZgutmk8rINHKRjqxSlMzG1RwERytprHuTbE8dhgZpIGC8MCleZCWw6MgYZpMT+zkbn9h1RvMNqykJe8wm7xZ2kvh3GeT4U/x+vKcpZ4I37Z1EaYB00gqP27kijtTroJfxivRrWVkt1dGodf/zWvWDSfcpd3XCtXj7Dm2ArxqUIOY9s/r8vjexLYT1dl4C8hiws6FqDsboEgMABe6ISnEiz9y89iy4fe3MQsMJDxm+j7r+rPbgGBRIo656S5X70YICGfeoaaVB1rY8i8secHbksgGLGRQ+eO10=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAHLgaQ/4E/sQWekStaAMvt1Ph+En6Q5DwKotbLW8pRs#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFKBumScSud+IijrfZ833jz4q+UeTNJNv/R4cNEeV6239TX/jAq+Tx9Lc1nQ7pzX20+PhXT33zREmhNvH/EIlME=#012 create=True mode=0644 path=/tmp/ansible.bka4n_5a state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:40 np0005537642 python3.9[68937]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.bka4n_5a' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:51:41 np0005537642 python3.9[69091]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.bka4n_5a state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:42 np0005537642 systemd-logind[801]: Session 15 logged out. Waiting for processes to exit.
Nov 27 05:51:42 np0005537642 systemd[1]: session-15.scope: Deactivated successfully.
Nov 27 05:51:42 np0005537642 systemd[1]: session-15.scope: Consumed 4.131s CPU time.
Nov 27 05:51:42 np0005537642 systemd-logind[801]: Removed session 15.
Nov 27 05:51:48 np0005537642 systemd-logind[801]: New session 16 of user zuul.
Nov 27 05:51:48 np0005537642 systemd[1]: Started Session 16 of User zuul.
Nov 27 05:51:49 np0005537642 python3.9[69269]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:51:50 np0005537642 python3.9[69425]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 27 05:51:52 np0005537642 python3.9[69579]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 27 05:51:53 np0005537642 python3.9[69732]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:51:54 np0005537642 python3.9[69885]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:51:55 np0005537642 python3.9[70039]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:51:56 np0005537642 python3.9[70194]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:51:56 np0005537642 systemd[1]: session-16.scope: Deactivated successfully.
Nov 27 05:51:56 np0005537642 systemd[1]: session-16.scope: Consumed 4.608s CPU time.
Nov 27 05:51:57 np0005537642 systemd-logind[801]: Session 16 logged out. Waiting for processes to exit.
Nov 27 05:51:57 np0005537642 systemd-logind[801]: Removed session 16.
Nov 27 05:52:02 np0005537642 systemd-logind[801]: New session 17 of user zuul.
Nov 27 05:52:02 np0005537642 systemd[1]: Started Session 17 of User zuul.
Nov 27 05:52:04 np0005537642 python3.9[70372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:52:05 np0005537642 python3.9[70528]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 27 05:52:06 np0005537642 python3.9[70612]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 27 05:52:09 np0005537642 python3.9[70763]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:52:10 np0005537642 python3.9[70914]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 27 05:52:11 np0005537642 python3.9[71064]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:52:11 np0005537642 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 27 05:52:12 np0005537642 python3.9[71215]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 27 05:52:13 np0005537642 chronyd[58453]: Selected source 23.133.168.245 (pool.ntp.org)
Nov 27 05:52:13 np0005537642 systemd[1]: session-17.scope: Deactivated successfully.
Nov 27 05:52:13 np0005537642 systemd[1]: session-17.scope: Consumed 6.992s CPU time.
Nov 27 05:52:13 np0005537642 systemd-logind[801]: Session 17 logged out. Waiting for processes to exit.
Nov 27 05:52:13 np0005537642 systemd-logind[801]: Removed session 17.
Nov 27 05:52:23 np0005537642 systemd-logind[801]: New session 18 of user zuul.
Nov 27 05:52:23 np0005537642 systemd[1]: Started Session 18 of User zuul.
Nov 27 05:52:31 np0005537642 python3[71981]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:52:33 np0005537642 python3[72076]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 27 05:52:34 np0005537642 python3[72103]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 27 05:52:35 np0005537642 python3[72129]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:52:35 np0005537642 kernel: loop: module loaded
Nov 27 05:52:35 np0005537642 kernel: loop3: detected capacity change from 0 to 41943040
Nov 27 05:52:35 np0005537642 python3[72164]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
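The two `_raw_params` commands above (with `#012` decoded to newlines) build the OSD backing store: a 20 GiB sparse file attached to /dev/loop3, then turned into an LVM volume group and logical volume. A runnable sketch of the same sequence follows; only the sparse-file step runs unprivileged, and the path is changed to a temp location (the log uses /var/lib/ceph-osd-0.img), while the loop/LVM steps, which need root and real devices, are shown as comments:

```shell
#!/bin/sh
set -e
img=/tmp/ceph-osd-0.img   # hypothetical path; the log writes /var/lib/ceph-osd-0.img

# Create a 20 GiB sparse file: write zero blocks, just seek to 20G and truncate there.
dd if=/dev/zero of="$img" bs=1 count=0 seek=20G 2>/dev/null

# Apparent size is 20 * 1024^3 bytes; no disk blocks are actually allocated.
stat -c %s "$img"    # prints 21474836480

# Remaining steps from the log (root required):
#   losetup /dev/loop3 /var/lib/ceph-osd-0.img
#   pvcreate /dev/loop3
#   vgcreate ceph_vg0 /dev/loop3
#   lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
rm -f "$img"
```

The lvm/systemd lines that follow in the log show the autoactivation generator picking up ceph_vg0 as soon as the PV appears on /dev/loop3.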
Nov 27 05:52:36 np0005537642 lvm[72167]: PV /dev/loop3 not used.
Nov 27 05:52:36 np0005537642 lvm[72169]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:52:36 np0005537642 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 27 05:52:36 np0005537642 lvm[72179]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:52:36 np0005537642 lvm[72179]: VG ceph_vg0 finished
Nov 27 05:52:36 np0005537642 lvm[72178]:  1 logical volume(s) in volume group "ceph_vg0" now active
Nov 27 05:52:36 np0005537642 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Nov 27 05:52:36 np0005537642 python3[72257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:52:37 np0005537642 python3[72330]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764240756.475807-36737-156679707765712/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:52:38 np0005537642 python3[72380]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 27 05:52:38 np0005537642 systemd[1]: Reloading.
Nov 27 05:52:38 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:52:38 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:52:38 np0005537642 systemd[1]: Starting Ceph OSD losetup...
Nov 27 05:52:38 np0005537642 bash[72419]: /dev/loop3: [64513]:4327964 (/var/lib/ceph-osd-0.img)
Nov 27 05:52:38 np0005537642 systemd[1]: Finished Ceph OSD losetup.
Nov 27 05:52:38 np0005537642 lvm[72421]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:52:38 np0005537642 lvm[72421]: VG ceph_vg0 finished
Nov 27 05:52:41 np0005537642 python3[72445]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 27 05:52:44 np0005537642 python3[72538]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 27 05:52:47 np0005537642 python3[72596]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 27 05:52:50 np0005537642 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 27 05:52:50 np0005537642 systemd[1]: Starting man-db-cache-update.service...
Nov 27 05:52:51 np0005537642 python3[72710]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 27 05:52:52 np0005537642 python3[72738]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:52:52 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:52:52 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:52:53 np0005537642 python3[72800]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:52:53 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:52:53 np0005537642 python3[72826]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:52:54 np0005537642 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 27 05:52:54 np0005537642 systemd[1]: Finished man-db-cache-update.service.
Nov 27 05:52:54 np0005537642 systemd[1]: run-r9ca75efd491b41d8b22dc7180ad06452.service: Deactivated successfully.
Nov 27 05:52:54 np0005537642 python3[72905]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:52:54 np0005537642 python3[72978]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764240773.8762174-36929-23765680716516/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:52:55 np0005537642 python3[73080]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:52:56 np0005537642 python3[73153]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764240775.4671867-36947-96061211961680/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:52:56 np0005537642 python3[73203]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 27 05:52:57 np0005537642 python3[73231]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 27 05:52:57 np0005537642 python3[73259]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 27 05:52:57 np0005537642 python3[73287]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:52:58 np0005537642 systemd-logind[801]: New session 19 of user ceph-admin.
Nov 27 05:52:58 np0005537642 systemd[1]: Created slice User Slice of UID 42477.
Nov 27 05:52:58 np0005537642 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 27 05:52:58 np0005537642 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 27 05:52:58 np0005537642 systemd[1]: Starting User Manager for UID 42477...
Nov 27 05:52:58 np0005537642 systemd[73295]: Queued start job for default target Main User Target.
Nov 27 05:52:58 np0005537642 systemd[73295]: Created slice User Application Slice.
Nov 27 05:52:58 np0005537642 systemd[73295]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 27 05:52:58 np0005537642 systemd[73295]: Started Daily Cleanup of User's Temporary Directories.
Nov 27 05:52:58 np0005537642 systemd[73295]: Reached target Paths.
Nov 27 05:52:58 np0005537642 systemd[73295]: Reached target Timers.
Nov 27 05:52:58 np0005537642 systemd[73295]: Starting D-Bus User Message Bus Socket...
Nov 27 05:52:58 np0005537642 systemd[73295]: Starting Create User's Volatile Files and Directories...
Nov 27 05:52:58 np0005537642 systemd[73295]: Finished Create User's Volatile Files and Directories.
Nov 27 05:52:58 np0005537642 systemd[73295]: Listening on D-Bus User Message Bus Socket.
Nov 27 05:52:58 np0005537642 systemd[73295]: Reached target Sockets.
Nov 27 05:52:58 np0005537642 systemd[73295]: Reached target Basic System.
Nov 27 05:52:58 np0005537642 systemd[73295]: Reached target Main User Target.
Nov 27 05:52:58 np0005537642 systemd[73295]: Startup finished in 141ms.
Nov 27 05:52:58 np0005537642 systemd[1]: Started User Manager for UID 42477.
Nov 27 05:52:58 np0005537642 systemd[1]: Started Session 19 of User ceph-admin.
Nov 27 05:52:58 np0005537642 systemd[1]: session-19.scope: Deactivated successfully.
Nov 27 05:52:58 np0005537642 systemd-logind[801]: Session 19 logged out. Waiting for processes to exit.
Nov 27 05:52:58 np0005537642 systemd-logind[801]: Removed session 19.
Nov 27 05:52:58 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:52:58 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:01 np0005537642 systemd[1]: var-lib-containers-storage-overlay-compat3295413149-merged.mount: Deactivated successfully.
Nov 27 05:53:02 np0005537642 systemd[1]: var-lib-containers-storage-overlay-compat3295413149-lower\x2dmapped.mount: Deactivated successfully.
Nov 27 05:53:08 np0005537642 systemd[1]: Stopping User Manager for UID 42477...
Nov 27 05:53:08 np0005537642 systemd[73295]: Activating special unit Exit the Session...
Nov 27 05:53:08 np0005537642 systemd[73295]: Stopped target Main User Target.
Nov 27 05:53:08 np0005537642 systemd[73295]: Stopped target Basic System.
Nov 27 05:53:08 np0005537642 systemd[73295]: Stopped target Paths.
Nov 27 05:53:08 np0005537642 systemd[73295]: Stopped target Sockets.
Nov 27 05:53:08 np0005537642 systemd[73295]: Stopped target Timers.
Nov 27 05:53:08 np0005537642 systemd[73295]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 27 05:53:08 np0005537642 systemd[73295]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 27 05:53:08 np0005537642 systemd[73295]: Closed D-Bus User Message Bus Socket.
Nov 27 05:53:08 np0005537642 systemd[73295]: Stopped Create User's Volatile Files and Directories.
Nov 27 05:53:08 np0005537642 systemd[73295]: Removed slice User Application Slice.
Nov 27 05:53:08 np0005537642 systemd[73295]: Reached target Shutdown.
Nov 27 05:53:08 np0005537642 systemd[73295]: Finished Exit the Session.
Nov 27 05:53:08 np0005537642 systemd[73295]: Reached target Exit the Session.
Nov 27 05:53:08 np0005537642 systemd[1]: user@42477.service: Deactivated successfully.
Nov 27 05:53:08 np0005537642 systemd[1]: Stopped User Manager for UID 42477.
Nov 27 05:53:08 np0005537642 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 27 05:53:08 np0005537642 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 27 05:53:08 np0005537642 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 27 05:53:08 np0005537642 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 27 05:53:08 np0005537642 systemd[1]: Removed slice User Slice of UID 42477.
Nov 27 05:53:17 np0005537642 podman[73390]: 2025-11-27 10:53:17.255401527 +0000 UTC m=+18.306043806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:17 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:17 np0005537642 podman[73454]: 2025-11-27 10:53:17.364841908 +0000 UTC m=+0.065629356 container create 1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f (image=quay.io/ceph/ceph:v19, name=adoring_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 27 05:53:17 np0005537642 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 27 05:53:17 np0005537642 systemd[1]: Started libpod-conmon-1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f.scope.
Nov 27 05:53:17 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:17 np0005537642 podman[73454]: 2025-11-27 10:53:17.341889866 +0000 UTC m=+0.042677314 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:17 np0005537642 podman[73454]: 2025-11-27 10:53:17.471678335 +0000 UTC m=+0.172465773 container init 1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f (image=quay.io/ceph/ceph:v19, name=adoring_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 27 05:53:17 np0005537642 podman[73454]: 2025-11-27 10:53:17.484773678 +0000 UTC m=+0.185561116 container start 1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f (image=quay.io/ceph/ceph:v19, name=adoring_easley, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:17 np0005537642 podman[73454]: 2025-11-27 10:53:17.488811822 +0000 UTC m=+0.189599320 container attach 1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f (image=quay.io/ceph/ceph:v19, name=adoring_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:17 np0005537642 adoring_easley[73470]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Nov 27 05:53:17 np0005537642 systemd[1]: libpod-1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f.scope: Deactivated successfully.
Nov 27 05:53:17 np0005537642 podman[73454]: 2025-11-27 10:53:17.589661969 +0000 UTC m=+0.290449377 container died 1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f (image=quay.io/ceph/ceph:v19, name=adoring_easley, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:17 np0005537642 systemd[1]: var-lib-containers-storage-overlay-d6042d71152e1c6b873fc2d1e0c24bf61b821399709921b5eee49088e8ff8d9d-merged.mount: Deactivated successfully.
Nov 27 05:53:17 np0005537642 podman[73454]: 2025-11-27 10:53:17.639056693 +0000 UTC m=+0.339844131 container remove 1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f (image=quay.io/ceph/ceph:v19, name=adoring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 27 05:53:17 np0005537642 systemd[1]: libpod-conmon-1eb6fc39e1c30814c14b13ad6d25c0b34041b6c905dcb066526b24a70265b77f.scope: Deactivated successfully.
Nov 27 05:53:17 np0005537642 podman[73487]: 2025-11-27 10:53:17.712339516 +0000 UTC m=+0.044798364 container create 9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:17 np0005537642 systemd[1]: Started libpod-conmon-9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8.scope.
Nov 27 05:53:17 np0005537642 podman[73487]: 2025-11-27 10:53:17.69100004 +0000 UTC m=+0.023458888 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:17 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:17 np0005537642 podman[73487]: 2025-11-27 10:53:17.837229867 +0000 UTC m=+0.169688705 container init 9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 27 05:53:17 np0005537642 podman[73487]: 2025-11-27 10:53:17.847408926 +0000 UTC m=+0.179867744 container start 9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 27 05:53:17 np0005537642 keen_dubinsky[73503]: 167 167
Nov 27 05:53:17 np0005537642 systemd[1]: libpod-9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8.scope: Deactivated successfully.
Nov 27 05:53:17 np0005537642 podman[73487]: 2025-11-27 10:53:17.85424176 +0000 UTC m=+0.186700668 container attach 9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 27 05:53:17 np0005537642 podman[73487]: 2025-11-27 10:53:17.854644902 +0000 UTC m=+0.187103750 container died 9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 27 05:53:17 np0005537642 podman[73487]: 2025-11-27 10:53:17.908785201 +0000 UTC m=+0.241244019 container remove 9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8 (image=quay.io/ceph/ceph:v19, name=keen_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 27 05:53:17 np0005537642 systemd[1]: libpod-conmon-9818c2658998c2fa34df8970e60a5b64845bcdc2ab0c4a4e7c1b1e2b6b4600f8.scope: Deactivated successfully.
Nov 27 05:53:18 np0005537642 podman[73521]: 2025-11-27 10:53:18.010533696 +0000 UTC m=+0.070171319 container create 872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80 (image=quay.io/ceph/ceph:v19, name=brave_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Nov 27 05:53:18 np0005537642 systemd[1]: Started libpod-conmon-872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80.scope.
Nov 27 05:53:18 np0005537642 podman[73521]: 2025-11-27 10:53:17.979420099 +0000 UTC m=+0.039057772 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:18 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:18 np0005537642 podman[73521]: 2025-11-27 10:53:18.109088786 +0000 UTC m=+0.168726399 container init 872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80 (image=quay.io/ceph/ceph:v19, name=brave_sammet, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:18 np0005537642 podman[73521]: 2025-11-27 10:53:18.121005418 +0000 UTC m=+0.180643011 container start 872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80 (image=quay.io/ceph/ceph:v19, name=brave_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 27 05:53:18 np0005537642 podman[73521]: 2025-11-27 10:53:18.135942985 +0000 UTC m=+0.195580588 container attach 872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80 (image=quay.io/ceph/ceph:v19, name=brave_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:18 np0005537642 brave_sammet[73538]: AQCeLShp/xCbCRAAAciAcgDoIPv++GV3AqeuYA==
Nov 27 05:53:18 np0005537642 systemd[1]: libpod-872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80.scope: Deactivated successfully.
Nov 27 05:53:18 np0005537642 podman[73521]: 2025-11-27 10:53:18.164940476 +0000 UTC m=+0.224578129 container died 872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80 (image=quay.io/ceph/ceph:v19, name=brave_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 27 05:53:18 np0005537642 podman[73521]: 2025-11-27 10:53:18.271801095 +0000 UTC m=+0.331438708 container remove 872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80 (image=quay.io/ceph/ceph:v19, name=brave_sammet, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:18 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:18 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:18 np0005537642 systemd[1]: libpod-conmon-872c923c68ef5373755a652f7b38b31fdc6c90816f8ff6d30ce4f3e06e5f0b80.scope: Deactivated successfully.
Nov 27 05:53:18 np0005537642 podman[73558]: 2025-11-27 10:53:18.371728122 +0000 UTC m=+0.064270076 container create cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad (image=quay.io/ceph/ceph:v19, name=lucid_pasteur, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 27 05:53:18 np0005537642 systemd[1]: Started libpod-conmon-cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad.scope.
Nov 27 05:53:18 np0005537642 podman[73558]: 2025-11-27 10:53:18.343545799 +0000 UTC m=+0.036087753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:18 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:18 np0005537642 podman[73558]: 2025-11-27 10:53:18.474772077 +0000 UTC m=+0.167314071 container init cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad (image=quay.io/ceph/ceph:v19, name=lucid_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 27 05:53:18 np0005537642 podman[73558]: 2025-11-27 10:53:18.483546448 +0000 UTC m=+0.176088402 container start cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad (image=quay.io/ceph/ceph:v19, name=lucid_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 27 05:53:18 np0005537642 podman[73558]: 2025-11-27 10:53:18.488150609 +0000 UTC m=+0.180692563 container attach cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad (image=quay.io/ceph/ceph:v19, name=lucid_pasteur, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 27 05:53:18 np0005537642 lucid_pasteur[73574]: AQCeLShp5fH3HhAAiTID5T8AnmWmRJe7sqIJjQ==
Nov 27 05:53:18 np0005537642 systemd[1]: libpod-cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad.scope: Deactivated successfully.
Nov 27 05:53:18 np0005537642 podman[73558]: 2025-11-27 10:53:18.524520355 +0000 UTC m=+0.217062309 container died cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad (image=quay.io/ceph/ceph:v19, name=lucid_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 27 05:53:18 np0005537642 podman[73558]: 2025-11-27 10:53:18.589950935 +0000 UTC m=+0.282492849 container remove cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad (image=quay.io/ceph/ceph:v19, name=lucid_pasteur, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:18 np0005537642 systemd[1]: libpod-conmon-cad8dc015d6c4aae3c60b6a76c8f5154f3fbe63226ff3967ac6054339c9ee4ad.scope: Deactivated successfully.
Nov 27 05:53:18 np0005537642 podman[73593]: 2025-11-27 10:53:18.64172409 +0000 UTC m=+0.026508873 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:18 np0005537642 podman[73593]: 2025-11-27 10:53:18.834211039 +0000 UTC m=+0.218995752 container create 030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca (image=quay.io/ceph/ceph:v19, name=boring_perlman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:18 np0005537642 systemd[1]: Started libpod-conmon-030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca.scope.
Nov 27 05:53:18 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:19 np0005537642 podman[73593]: 2025-11-27 10:53:19.004333135 +0000 UTC m=+0.389117898 container init 030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca (image=quay.io/ceph/ceph:v19, name=boring_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 27 05:53:19 np0005537642 podman[73593]: 2025-11-27 10:53:19.013838131 +0000 UTC m=+0.398622854 container start 030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca (image=quay.io/ceph/ceph:v19, name=boring_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 27 05:53:19 np0005537642 podman[73593]: 2025-11-27 10:53:19.029545055 +0000 UTC m=+0.414329818 container attach 030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca (image=quay.io/ceph/ceph:v19, name=boring_perlman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Nov 27 05:53:19 np0005537642 boring_perlman[73610]: AQCfLShpQ9b0AhAApskm7GvEteIM/rNAZZRvHw==
Nov 27 05:53:19 np0005537642 systemd[1]: libpod-030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca.scope: Deactivated successfully.
Nov 27 05:53:19 np0005537642 podman[73593]: 2025-11-27 10:53:19.0550652 +0000 UTC m=+0.439849963 container died 030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca (image=quay.io/ceph/ceph:v19, name=boring_perlman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:19 np0005537642 podman[73593]: 2025-11-27 10:53:19.157273945 +0000 UTC m=+0.542058658 container remove 030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca (image=quay.io/ceph/ceph:v19, name=boring_perlman, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 27 05:53:19 np0005537642 systemd[1]: libpod-conmon-030592f5573bbe3546e00a0ef019a647319b4e77227fdd172e5c6cee056ebfca.scope: Deactivated successfully.
Nov 27 05:53:19 np0005537642 podman[73627]: 2025-11-27 10:53:19.243275403 +0000 UTC m=+0.060907544 container create c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113 (image=quay.io/ceph/ceph:v19, name=objective_fermat, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 27 05:53:19 np0005537642 podman[73627]: 2025-11-27 10:53:19.211977911 +0000 UTC m=+0.029610132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:19 np0005537642 systemd[1]: Started libpod-conmon-c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113.scope.
Nov 27 05:53:19 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14e80e9d66ae8164f26f8196d0296ac54b2a19af442b83d9c5c68c888e235004/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:19 np0005537642 podman[73627]: 2025-11-27 10:53:19.401043435 +0000 UTC m=+0.218675636 container init c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113 (image=quay.io/ceph/ceph:v19, name=objective_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 27 05:53:19 np0005537642 podman[73627]: 2025-11-27 10:53:19.410917209 +0000 UTC m=+0.228549330 container start c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113 (image=quay.io/ceph/ceph:v19, name=objective_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 27 05:53:19 np0005537642 podman[73627]: 2025-11-27 10:53:19.417818611 +0000 UTC m=+0.235450762 container attach c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113 (image=quay.io/ceph/ceph:v19, name=objective_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:19 np0005537642 objective_fermat[73644]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 27 05:53:19 np0005537642 objective_fermat[73644]: setting min_mon_release = quincy
Nov 27 05:53:19 np0005537642 objective_fermat[73644]: /usr/bin/monmaptool: set fsid to 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:19 np0005537642 objective_fermat[73644]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 27 05:53:19 np0005537642 systemd[1]: libpod-c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113.scope: Deactivated successfully.
Nov 27 05:53:19 np0005537642 podman[73627]: 2025-11-27 10:53:19.462883083 +0000 UTC m=+0.280515234 container died c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113 (image=quay.io/ceph/ceph:v19, name=objective_fermat, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:19 np0005537642 systemd[1]: var-lib-containers-storage-overlay-14e80e9d66ae8164f26f8196d0296ac54b2a19af442b83d9c5c68c888e235004-merged.mount: Deactivated successfully.
Nov 27 05:53:19 np0005537642 podman[73627]: 2025-11-27 10:53:19.541025274 +0000 UTC m=+0.358657415 container remove c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113 (image=quay.io/ceph/ceph:v19, name=objective_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:19 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:19 np0005537642 systemd[1]: libpod-conmon-c1dc440415d7031944dc53ec423a21ee068e3672462179f939c3612416727113.scope: Deactivated successfully.
Nov 27 05:53:19 np0005537642 podman[73662]: 2025-11-27 10:53:19.666147479 +0000 UTC m=+0.093061747 container create 49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22 (image=quay.io/ceph/ceph:v19, name=wonderful_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:19 np0005537642 podman[73662]: 2025-11-27 10:53:19.611204252 +0000 UTC m=+0.038118550 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:19 np0005537642 systemd[1]: Started libpod-conmon-49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22.scope.
Nov 27 05:53:19 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8528ab9e67bb1962a3b07e5e17050bf800c311445f77991c6533d380112f18f7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8528ab9e67bb1962a3b07e5e17050bf800c311445f77991c6533d380112f18f7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8528ab9e67bb1962a3b07e5e17050bf800c311445f77991c6533d380112f18f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8528ab9e67bb1962a3b07e5e17050bf800c311445f77991c6533d380112f18f7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:19 np0005537642 podman[73662]: 2025-11-27 10:53:19.766873404 +0000 UTC m=+0.193787732 container init 49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22 (image=quay.io/ceph/ceph:v19, name=wonderful_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:19 np0005537642 podman[73662]: 2025-11-27 10:53:19.777542316 +0000 UTC m=+0.204456564 container start 49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22 (image=quay.io/ceph/ceph:v19, name=wonderful_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:19 np0005537642 podman[73662]: 2025-11-27 10:53:19.793651029 +0000 UTC m=+0.220565367 container attach 49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22 (image=quay.io/ceph/ceph:v19, name=wonderful_germain, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 27 05:53:19 np0005537642 systemd[1]: libpod-49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22.scope: Deactivated successfully.
Nov 27 05:53:19 np0005537642 podman[73662]: 2025-11-27 10:53:19.946949087 +0000 UTC m=+0.373923978 container died 49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22 (image=quay.io/ceph/ceph:v19, name=wonderful_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:20 np0005537642 podman[73662]: 2025-11-27 10:53:20.147879371 +0000 UTC m=+0.574793639 container remove 49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22 (image=quay.io/ceph/ceph:v19, name=wonderful_germain, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:20 np0005537642 systemd[1]: libpod-conmon-49b2b81d8f2e7837b963f8b6425ab52ccdbd5141213d5a0cf32f68e391c81b22.scope: Deactivated successfully.
Nov 27 05:53:20 np0005537642 systemd[1]: Reloading.
Nov 27 05:53:20 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:53:20 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:53:20 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:20 np0005537642 systemd[1]: Reloading.
Nov 27 05:53:20 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:53:20 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:53:20 np0005537642 systemd[1]: Reached target All Ceph clusters and services.
Nov 27 05:53:20 np0005537642 systemd[1]: Reloading.
Nov 27 05:53:20 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:53:20 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:53:21 np0005537642 systemd[1]: Reached target Ceph cluster 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:53:21 np0005537642 systemd[1]: Reloading.
Nov 27 05:53:21 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:53:21 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:53:21 np0005537642 systemd[1]: Reloading.
Nov 27 05:53:21 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:53:21 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:53:21 np0005537642 systemd[1]: Created slice Slice /system/ceph-4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:53:21 np0005537642 systemd[1]: Reached target System Time Set.
Nov 27 05:53:21 np0005537642 systemd[1]: Reached target System Time Synchronized.
Nov 27 05:53:21 np0005537642 systemd[1]: Starting Ceph mon.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:53:21 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:21 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:22 np0005537642 podman[73957]: 2025-11-27 10:53:22.081729366 +0000 UTC m=+0.108427064 container create 6aa10e3837b8b8ec2aa49aa0becb6ff902044c0eab78b7c95e68eaf05a24d091 (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 27 05:53:22 np0005537642 podman[73957]: 2025-11-27 10:53:22.001097177 +0000 UTC m=+0.027794825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a01a9ffbd4c76ab6f333d05f958dbc04f0d3b2972c0150f83b462167c5e2136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a01a9ffbd4c76ab6f333d05f958dbc04f0d3b2972c0150f83b462167c5e2136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a01a9ffbd4c76ab6f333d05f958dbc04f0d3b2972c0150f83b462167c5e2136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a01a9ffbd4c76ab6f333d05f958dbc04f0d3b2972c0150f83b462167c5e2136/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 podman[73957]: 2025-11-27 10:53:22.269233767 +0000 UTC m=+0.295931445 container init 6aa10e3837b8b8ec2aa49aa0becb6ff902044c0eab78b7c95e68eaf05a24d091 (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:22 np0005537642 podman[73957]: 2025-11-27 10:53:22.28097493 +0000 UTC m=+0.307672558 container start 6aa10e3837b8b8ec2aa49aa0becb6ff902044c0eab78b7c95e68eaf05a24d091 (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 27 05:53:22 np0005537642 bash[73957]: 6aa10e3837b8b8ec2aa49aa0becb6ff902044c0eab78b7c95e68eaf05a24d091
Nov 27 05:53:22 np0005537642 systemd[1]: Started Ceph mon.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: set uid:gid to 167:167 (ceph:ceph)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: pidfile_write: ignore empty --pid-file
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: load: jerasure load: lrc 
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: RocksDB version: 7.9.2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Git sha 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: DB SUMMARY
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: DB Session ID:  IGS0OMY1LQ6IFPRGQWOK
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: CURRENT file:  CURRENT
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: IDENTITY file:  IDENTITY
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                         Options.error_if_exists: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                       Options.create_if_missing: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                         Options.paranoid_checks: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                                     Options.env: 0x556ec9da3c20
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                                Options.info_log: 0x556ecbdbcd60
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.max_file_opening_threads: 16
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                              Options.statistics: (nil)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                               Options.use_fsync: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                       Options.max_log_file_size: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                         Options.allow_fallocate: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                        Options.use_direct_reads: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:          Options.create_missing_column_families: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                              Options.db_log_dir: 
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                                 Options.wal_dir: 
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                   Options.advise_random_on_open: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                    Options.write_buffer_manager: 0x556ecbdc1900
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                            Options.rate_limiter: (nil)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.unordered_write: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                               Options.row_cache: None
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                              Options.wal_filter: None
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.allow_ingest_behind: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.two_write_queues: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.manual_wal_flush: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.wal_compression: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.atomic_flush: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                 Options.log_readahead_size: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.allow_data_in_errors: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.db_host_id: __hostname__
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.max_background_jobs: 2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.max_background_compactions: -1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.max_subcompactions: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.max_total_wal_size: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                          Options.max_open_files: -1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                          Options.bytes_per_sync: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:       Options.compaction_readahead_size: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.max_background_flushes: -1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Compression algorithms supported:
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: #011kZSTD supported: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: #011kXpressCompression supported: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: #011kBZip2Compression supported: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: #011kLZ4Compression supported: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: #011kZlibCompression supported: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: #011kSnappyCompression supported: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:           Options.merge_operator: 
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:        Options.compaction_filter: None
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556ecbdbc500)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556ecbde1350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:        Options.write_buffer_size: 33554432
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:  Options.max_write_buffer_number: 2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:          Options.compression: NoCompression
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.num_levels: 7
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 65f89df2-0592-497f-b5d7-5930e7c7d9aa
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240802327550, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240802329333, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764240802, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65f89df2-0592-497f-b5d7-5930e7c7d9aa", "db_session_id": "IGS0OMY1LQ6IFPRGQWOK", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240802329428, "job": 1, "event": "recovery_finished"}
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556ecbde2e00
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: DB pointer 0x556ecbeec000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.1      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556ecbde1350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@-1(???) e0 preinit fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : monmap epoch 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : last_changed 2025-11-27T10:53:19.458310+0000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : created 2025-11-27T10:53:19.458310+0000
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,os=Linux}
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).mds e1 new map
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).mds e1 print_map
e1
btime 2025-11-27T10:53:22:366160+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : fsmap 
Nov 27 05:53:22 np0005537642 podman[73994]: 2025-11-27 10:53:22.37472645 +0000 UTC m=+0.035405950 container create bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb (image=quay.io/ceph/ceph:v19, name=frosty_fermi, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mkfs 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 27 05:53:22 np0005537642 systemd[1]: Started libpod-conmon-bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb.scope.
Nov 27 05:53:22 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:22 np0005537642 podman[73994]: 2025-11-27 10:53:22.359533131 +0000 UTC m=+0.020212631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0dc5f5c45fc35a998380a9d4a29e68a11f613f9580ddca823bce9924cb4508/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0dc5f5c45fc35a998380a9d4a29e68a11f613f9580ddca823bce9924cb4508/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c0dc5f5c45fc35a998380a9d4a29e68a11f613f9580ddca823bce9924cb4508/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 podman[73994]: 2025-11-27 10:53:22.482291203 +0000 UTC m=+0.142970773 container init bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb (image=quay.io/ceph/ceph:v19, name=frosty_fermi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 27 05:53:22 np0005537642 podman[73994]: 2025-11-27 10:53:22.493788815 +0000 UTC m=+0.154468315 container start bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb (image=quay.io/ceph/ceph:v19, name=frosty_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 27 05:53:22 np0005537642 podman[73994]: 2025-11-27 10:53:22.49744779 +0000 UTC m=+0.158127320 container attach bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb (image=quay.io/ceph/ceph:v19, name=frosty_fermi, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Nov 27 05:53:22 np0005537642 ceph-mon[73977]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/461980104' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:  cluster:
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    id:     4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    health: HEALTH_OK
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]: 
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:  services:
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    mon: 1 daemons, quorum compute-0 (age 0.335009s)
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    mgr: no daemons active
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    osd: 0 osds: 0 up, 0 in
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]: 
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:  data:
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    pools:   0 pools, 0 pgs
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    objects: 0 objects, 0 B
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    usage:   0 B used, 0 B / 0 B avail
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]:    pgs:     
Nov 27 05:53:22 np0005537642 frosty_fermi[74033]: 
Nov 27 05:53:22 np0005537642 systemd[1]: libpod-bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb.scope: Deactivated successfully.
Nov 27 05:53:22 np0005537642 podman[73994]: 2025-11-27 10:53:22.718927271 +0000 UTC m=+0.379606791 container died bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb (image=quay.io/ceph/ceph:v19, name=frosty_fermi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 27 05:53:22 np0005537642 podman[73994]: 2025-11-27 10:53:22.762212918 +0000 UTC m=+0.422892418 container remove bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb (image=quay.io/ceph/ceph:v19, name=frosty_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:53:22 np0005537642 systemd[1]: libpod-conmon-bfe90dc401dd9478598e7b0a8d7cb99e3a7930abdc0acc8dd1aaa098a6942efb.scope: Deactivated successfully.
Nov 27 05:53:22 np0005537642 podman[74071]: 2025-11-27 10:53:22.855824581 +0000 UTC m=+0.061515163 container create fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587 (image=quay.io/ceph/ceph:v19, name=kind_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 27 05:53:22 np0005537642 systemd[1]: Started libpod-conmon-fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587.scope.
Nov 27 05:53:22 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:22 np0005537642 podman[74071]: 2025-11-27 10:53:22.8353766 +0000 UTC m=+0.041067212 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988bd883c0972914c083ed7ca9855f0d6b13a243cadb8fae37834bd9dd5d8b0d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988bd883c0972914c083ed7ca9855f0d6b13a243cadb8fae37834bd9dd5d8b0d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988bd883c0972914c083ed7ca9855f0d6b13a243cadb8fae37834bd9dd5d8b0d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/988bd883c0972914c083ed7ca9855f0d6b13a243cadb8fae37834bd9dd5d8b0d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:22 np0005537642 podman[74071]: 2025-11-27 10:53:22.955689734 +0000 UTC m=+0.161380346 container init fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587 (image=quay.io/ceph/ceph:v19, name=kind_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:22 np0005537642 podman[74071]: 2025-11-27 10:53:22.970203291 +0000 UTC m=+0.175893903 container start fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587 (image=quay.io/ceph/ceph:v19, name=kind_lumiere, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:22 np0005537642 podman[74071]: 2025-11-27 10:53:22.974794941 +0000 UTC m=+0.180485523 container attach fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587 (image=quay.io/ceph/ceph:v19, name=kind_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:23 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 27 05:53:23 np0005537642 ceph-mon[73977]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1982172920' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 27 05:53:23 np0005537642 ceph-mon[73977]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1982172920' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 27 05:53:23 np0005537642 kind_lumiere[74087]: 
Nov 27 05:53:23 np0005537642 kind_lumiere[74087]: [global]
Nov 27 05:53:23 np0005537642 kind_lumiere[74087]: 	fsid = 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:23 np0005537642 kind_lumiere[74087]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 27 05:53:23 np0005537642 systemd[1]: libpod-fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587.scope: Deactivated successfully.
Nov 27 05:53:23 np0005537642 podman[74113]: 2025-11-27 10:53:23.264149039 +0000 UTC m=+0.037114412 container died fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587 (image=quay.io/ceph/ceph:v19, name=kind_lumiere, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:23 np0005537642 systemd[1]: var-lib-containers-storage-overlay-988bd883c0972914c083ed7ca9855f0d6b13a243cadb8fae37834bd9dd5d8b0d-merged.mount: Deactivated successfully.
Nov 27 05:53:23 np0005537642 podman[74113]: 2025-11-27 10:53:23.311890441 +0000 UTC m=+0.084855834 container remove fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587 (image=quay.io/ceph/ceph:v19, name=kind_lumiere, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:23 np0005537642 systemd[1]: libpod-conmon-fc947cbf984b409cbd725309e4cca13101ab8318dd0e9fbb280254ee4d071587.scope: Deactivated successfully.
Nov 27 05:53:23 np0005537642 ceph-mon[73977]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 27 05:53:23 np0005537642 ceph-mon[73977]: from='client.? 192.168.122.100:0/1982172920' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 27 05:53:23 np0005537642 ceph-mon[73977]: from='client.? 192.168.122.100:0/1982172920' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 27 05:53:23 np0005537642 podman[74129]: 2025-11-27 10:53:23.405682713 +0000 UTC m=+0.056289083 container create 18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773 (image=quay.io/ceph/ceph:v19, name=ecstatic_wilbur, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:23 np0005537642 systemd[1]: Started libpod-conmon-18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773.scope.
Nov 27 05:53:23 np0005537642 podman[74129]: 2025-11-27 10:53:23.384863763 +0000 UTC m=+0.035470143 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:23 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:23 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9936f1b70ae94f9c1a85372a08b8a1660a355c2caee23c3185367fc53b7260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:23 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9936f1b70ae94f9c1a85372a08b8a1660a355c2caee23c3185367fc53b7260/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:23 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9936f1b70ae94f9c1a85372a08b8a1660a355c2caee23c3185367fc53b7260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:23 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f9936f1b70ae94f9c1a85372a08b8a1660a355c2caee23c3185367fc53b7260/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:23 np0005537642 podman[74129]: 2025-11-27 10:53:23.523881396 +0000 UTC m=+0.174487766 container init 18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773 (image=quay.io/ceph/ceph:v19, name=ecstatic_wilbur, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 27 05:53:23 np0005537642 podman[74129]: 2025-11-27 10:53:23.53479791 +0000 UTC m=+0.185404270 container start 18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773 (image=quay.io/ceph/ceph:v19, name=ecstatic_wilbur, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 27 05:53:23 np0005537642 podman[74129]: 2025-11-27 10:53:23.561004247 +0000 UTC m=+0.211610597 container attach 18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773 (image=quay.io/ceph/ceph:v19, name=ecstatic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Nov 27 05:53:23 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:53:23 np0005537642 ceph-mon[73977]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797004579' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:53:23 np0005537642 systemd[1]: libpod-18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773.scope: Deactivated successfully.
Nov 27 05:53:23 np0005537642 podman[74129]: 2025-11-27 10:53:23.763639453 +0000 UTC m=+0.414245783 container died 18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773 (image=quay.io/ceph/ceph:v19, name=ecstatic_wilbur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:23 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8f9936f1b70ae94f9c1a85372a08b8a1660a355c2caee23c3185367fc53b7260-merged.mount: Deactivated successfully.
Nov 27 05:53:23 np0005537642 podman[74129]: 2025-11-27 10:53:23.8067027 +0000 UTC m=+0.457309070 container remove 18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773 (image=quay.io/ceph/ceph:v19, name=ecstatic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 27 05:53:23 np0005537642 systemd[1]: libpod-conmon-18b7590d0968fcbb9810504eec8bc406970bf8dcb1b63b25bf51887cb26f9773.scope: Deactivated successfully.
Nov 27 05:53:23 np0005537642 systemd[1]: Stopping Ceph mon.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:53:24 np0005537642 ceph-mon[73977]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 27 05:53:24 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0[73973]: 2025-11-27T10:53:24.097+0000 7f43e80c2640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 27 05:53:24 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 27 05:53:24 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0[73973]: 2025-11-27T10:53:24.097+0000 7f43e80c2640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 27 05:53:24 np0005537642 ceph-mon[73977]: mon.compute-0@0(leader) e1 shutdown
Nov 27 05:53:24 np0005537642 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 27 05:53:24 np0005537642 ceph-mon[73977]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 27 05:53:24 np0005537642 podman[74215]: 2025-11-27 10:53:24.231881726 +0000 UTC m=+0.189813521 container died 6aa10e3837b8b8ec2aa49aa0becb6ff902044c0eab78b7c95e68eaf05a24d091 (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:53:24 np0005537642 systemd[1]: var-lib-containers-storage-overlay-2a01a9ffbd4c76ab6f333d05f958dbc04f0d3b2972c0150f83b462167c5e2136-merged.mount: Deactivated successfully.
Nov 27 05:53:24 np0005537642 podman[74215]: 2025-11-27 10:53:24.274474471 +0000 UTC m=+0.232406266 container remove 6aa10e3837b8b8ec2aa49aa0becb6ff902044c0eab78b7c95e68eaf05a24d091 (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:24 np0005537642 bash[74215]: ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0
Nov 27 05:53:24 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:24 np0005537642 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 27 05:53:24 np0005537642 systemd[1]: ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@mon.compute-0.service: Deactivated successfully.
Nov 27 05:53:24 np0005537642 systemd[1]: Stopped Ceph mon.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:53:24 np0005537642 systemd[1]: ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@mon.compute-0.service: Consumed 1.248s CPU time.
Nov 27 05:53:24 np0005537642 systemd[1]: Starting Ceph mon.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:53:24 np0005537642 podman[74317]: 2025-11-27 10:53:24.775366132 +0000 UTC m=+0.065917575 container create 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:24 np0005537642 podman[74317]: 2025-11-27 10:53:24.748515763 +0000 UTC m=+0.039067276 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae732d8016c008e113ddcd8f93f4a627746da9c6f83acb6c27da9a42d1cb2c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae732d8016c008e113ddcd8f93f4a627746da9c6f83acb6c27da9a42d1cb2c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae732d8016c008e113ddcd8f93f4a627746da9c6f83acb6c27da9a42d1cb2c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae732d8016c008e113ddcd8f93f4a627746da9c6f83acb6c27da9a42d1cb2c5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:24 np0005537642 podman[74317]: 2025-11-27 10:53:24.853814877 +0000 UTC m=+0.144366370 container init 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:24 np0005537642 podman[74317]: 2025-11-27 10:53:24.86679223 +0000 UTC m=+0.157343673 container start 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 27 05:53:24 np0005537642 bash[74317]: 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa
Nov 27 05:53:24 np0005537642 systemd[1]: Started Ceph mon.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: set uid:gid to 167:167 (ceph:ceph)
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: pidfile_write: ignore empty --pid-file
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: load: jerasure load: lrc 
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: RocksDB version: 7.9.2
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Git sha 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: DB SUMMARY
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: DB Session ID:  PS7NKDG3F09YEGXCLO27
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: CURRENT file:  CURRENT
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: IDENTITY file:  IDENTITY
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58739 ; 
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                         Options.error_if_exists: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                       Options.create_if_missing: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                         Options.paranoid_checks: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                                     Options.env: 0x55a93ae3cc20
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                                Options.info_log: 0x55a93c935ac0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.max_file_opening_threads: 16
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                              Options.statistics: (nil)
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                               Options.use_fsync: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                       Options.max_log_file_size: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                         Options.allow_fallocate: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                        Options.use_direct_reads: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:          Options.create_missing_column_families: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                              Options.db_log_dir: 
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                                 Options.wal_dir: 
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                   Options.advise_random_on_open: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                    Options.write_buffer_manager: 0x55a93c939900
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                            Options.rate_limiter: (nil)
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.unordered_write: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                               Options.row_cache: None
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                              Options.wal_filter: None
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.allow_ingest_behind: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.two_write_queues: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.manual_wal_flush: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.wal_compression: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.atomic_flush: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                 Options.log_readahead_size: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.allow_data_in_errors: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.db_host_id: __hostname__
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.max_background_jobs: 2
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.max_background_compactions: -1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.max_subcompactions: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.max_total_wal_size: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                          Options.max_open_files: -1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                          Options.bytes_per_sync: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:       Options.compaction_readahead_size: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.max_background_flushes: -1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Compression algorithms supported:
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: #011kZSTD supported: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: #011kXpressCompression supported: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: #011kBZip2Compression supported: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: #011kLZ4Compression supported: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: #011kZlibCompression supported: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: #011kLZ4HCCompression supported: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: #011kSnappyCompression supported: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:           Options.merge_operator: 
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:        Options.compaction_filter: None
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a93c934aa0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55a93c959350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:        Options.write_buffer_size: 33554432
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:  Options.max_write_buffer_number: 2
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:          Options.compression: NoCompression
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.num_levels: 7
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 65f89df2-0592-497f-b5d7-5930e7c7d9aa
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240804936255, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240804941852, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56964, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54481, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764240804, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65f89df2-0592-497f-b5d7-5930e7c7d9aa", "db_session_id": "PS7NKDG3F09YEGXCLO27", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240804942016, "job": 1, "event": "recovery_finished"}
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 27 05:53:24 np0005537642 podman[74339]: 2025-11-27 10:53:24.952303184 +0000 UTC m=+0.055782998 container create e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267 (image=quay.io/ceph/ceph:v19, name=adoring_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a93c95ae00
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: DB pointer 0x55a93ca64000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.8      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.8      0.01              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     10.8      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     10.8      0.01              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 2.63 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 2.63 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55a93c959350#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 3.7e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(???) e1 preinit fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(???).mds e1 new map
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(???).mds e1 print_map#012e1#012btime 2025-11-27T10:53:22:366160+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: -1#012 #012No filesystems configured
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap epoch 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : last_changed 2025-11-27T10:53:19.458310+0000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : created 2025-11-27T10:53:19.458310+0000
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap 
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 27 05:53:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 27 05:53:25 np0005537642 systemd[1]: Started libpod-conmon-e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267.scope.
Nov 27 05:53:25 np0005537642 podman[74339]: 2025-11-27 10:53:24.921451663 +0000 UTC m=+0.024931487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:25 np0005537642 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 27 05:53:25 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73cae4d891f8d39470318af39d692eed5fcc4701394aca2f36003dbe5884a0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73cae4d891f8d39470318af39d692eed5fcc4701394aca2f36003dbe5884a0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f73cae4d891f8d39470318af39d692eed5fcc4701394aca2f36003dbe5884a0e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:25 np0005537642 podman[74339]: 2025-11-27 10:53:25.074760942 +0000 UTC m=+0.178240756 container init e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267 (image=quay.io/ceph/ceph:v19, name=adoring_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:25 np0005537642 podman[74339]: 2025-11-27 10:53:25.088366865 +0000 UTC m=+0.191846649 container start e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267 (image=quay.io/ceph/ceph:v19, name=adoring_kilby, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Nov 27 05:53:25 np0005537642 podman[74339]: 2025-11-27 10:53:25.092803558 +0000 UTC m=+0.196283422 container attach e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267 (image=quay.io/ceph/ceph:v19, name=adoring_kilby, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Nov 27 05:53:25 np0005537642 systemd[1]: libpod-e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267.scope: Deactivated successfully.
Nov 27 05:53:25 np0005537642 podman[74339]: 2025-11-27 10:53:25.357642299 +0000 UTC m=+0.461122083 container died e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267 (image=quay.io/ceph/ceph:v19, name=adoring_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 27 05:53:25 np0005537642 systemd[1]: var-lib-containers-storage-overlay-f73cae4d891f8d39470318af39d692eed5fcc4701394aca2f36003dbe5884a0e-merged.mount: Deactivated successfully.
Nov 27 05:53:25 np0005537642 podman[74339]: 2025-11-27 10:53:25.400487305 +0000 UTC m=+0.503967119 container remove e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267 (image=quay.io/ceph/ceph:v19, name=adoring_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 27 05:53:25 np0005537642 systemd[1]: libpod-conmon-e8ab0a98d14e0b8445477330a08623d1db7bf12bf9335d6dc18b779cff4df267.scope: Deactivated successfully.
Nov 27 05:53:25 np0005537642 podman[74433]: 2025-11-27 10:53:25.499379632 +0000 UTC m=+0.065613220 container create 1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4 (image=quay.io/ceph/ceph:v19, name=elastic_albattani, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 27 05:53:25 np0005537642 systemd[1]: Started libpod-conmon-1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4.scope.
Nov 27 05:53:25 np0005537642 podman[74433]: 2025-11-27 10:53:25.470448563 +0000 UTC m=+0.036682211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:25 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962e50c44c1d34873b9bf2bbd637b9439eb7b914e3d58893c3cbd3d112ee43c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962e50c44c1d34873b9bf2bbd637b9439eb7b914e3d58893c3cbd3d112ee43c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/962e50c44c1d34873b9bf2bbd637b9439eb7b914e3d58893c3cbd3d112ee43c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:25 np0005537642 podman[74433]: 2025-11-27 10:53:25.602670599 +0000 UTC m=+0.168904187 container init 1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4 (image=quay.io/ceph/ceph:v19, name=elastic_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 27 05:53:25 np0005537642 podman[74433]: 2025-11-27 10:53:25.611624199 +0000 UTC m=+0.177857787 container start 1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4 (image=quay.io/ceph/ceph:v19, name=elastic_albattani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 27 05:53:25 np0005537642 podman[74433]: 2025-11-27 10:53:25.61685483 +0000 UTC m=+0.183088418 container attach 1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4 (image=quay.io/ceph/ceph:v19, name=elastic_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 27 05:53:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Nov 27 05:53:25 np0005537642 systemd[1]: libpod-1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4.scope: Deactivated successfully.
Nov 27 05:53:25 np0005537642 podman[74433]: 2025-11-27 10:53:25.828013405 +0000 UTC m=+0.394247003 container died 1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4 (image=quay.io/ceph/ceph:v19, name=elastic_albattani, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:25 np0005537642 systemd[1]: var-lib-containers-storage-overlay-962e50c44c1d34873b9bf2bbd637b9439eb7b914e3d58893c3cbd3d112ee43c5-merged.mount: Deactivated successfully.
Nov 27 05:53:25 np0005537642 podman[74433]: 2025-11-27 10:53:25.890506714 +0000 UTC m=+0.456740302 container remove 1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4 (image=quay.io/ceph/ceph:v19, name=elastic_albattani, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 27 05:53:25 np0005537642 systemd[1]: libpod-conmon-1fe47d7a9e4787d36e13536a33a0aebd4a1b855f106de8b56e1cb5a7e6a1c2a4.scope: Deactivated successfully.
Nov 27 05:53:25 np0005537642 systemd[1]: Reloading.
Nov 27 05:53:26 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:53:26 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:53:26 np0005537642 systemd[1]: Reloading.
Nov 27 05:53:26 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:53:26 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:53:26 np0005537642 systemd[1]: Starting Ceph mgr.compute-0.qnrkij for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:53:26 np0005537642 podman[74616]: 2025-11-27 10:53:26.82213657 +0000 UTC m=+0.065955057 container create ce70338c0e33aa8f8cc30c8b7a083570016bd1228c68e9fa40e35b40f0d8911e (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Nov 27 05:53:26 np0005537642 podman[74616]: 2025-11-27 10:53:26.792886056 +0000 UTC m=+0.036704533 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb3d0d2fb9f482189677fc48a5d8935f5440296d8e6f81b2ee0d3aeef76eeff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb3d0d2fb9f482189677fc48a5d8935f5440296d8e6f81b2ee0d3aeef76eeff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb3d0d2fb9f482189677fc48a5d8935f5440296d8e6f81b2ee0d3aeef76eeff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb3d0d2fb9f482189677fc48a5d8935f5440296d8e6f81b2ee0d3aeef76eeff/merged/var/lib/ceph/mgr/ceph-compute-0.qnrkij supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:26 np0005537642 podman[74616]: 2025-11-27 10:53:26.910982074 +0000 UTC m=+0.154800611 container init ce70338c0e33aa8f8cc30c8b7a083570016bd1228c68e9fa40e35b40f0d8911e (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 27 05:53:26 np0005537642 podman[74616]: 2025-11-27 10:53:26.917796201 +0000 UTC m=+0.161614688 container start ce70338c0e33aa8f8cc30c8b7a083570016bd1228c68e9fa40e35b40f0d8911e (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 27 05:53:26 np0005537642 bash[74616]: ce70338c0e33aa8f8cc30c8b7a083570016bd1228c68e9fa40e35b40f0d8911e
Nov 27 05:53:26 np0005537642 systemd[1]: Started Ceph mgr.compute-0.qnrkij for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:53:26 np0005537642 ceph-mgr[74636]: set uid:gid to 167:167 (ceph:ceph)
Nov 27 05:53:26 np0005537642 ceph-mgr[74636]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 27 05:53:26 np0005537642 ceph-mgr[74636]: pidfile_write: ignore empty --pid-file
Nov 27 05:53:27 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'alerts'
Nov 27 05:53:27 np0005537642 podman[74637]: 2025-11-27 10:53:27.043531796 +0000 UTC m=+0.068338181 container create 2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d (image=quay.io/ceph/ceph:v19, name=vigilant_shockley, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:27 np0005537642 systemd[1]: Started libpod-conmon-2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d.scope.
Nov 27 05:53:27 np0005537642 podman[74637]: 2025-11-27 10:53:27.011483508 +0000 UTC m=+0.036289893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:27 np0005537642 ceph-mgr[74636]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:53:27 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'balancer'
Nov 27 05:53:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:27.109+0000 7f3cabe09140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:53:27 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:27 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54d8846a576aabc7320fa62fc06cf9d50b2e2c1d79d2fe61e05d305cb6b121/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:27 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54d8846a576aabc7320fa62fc06cf9d50b2e2c1d79d2fe61e05d305cb6b121/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:27 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a54d8846a576aabc7320fa62fc06cf9d50b2e2c1d79d2fe61e05d305cb6b121/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:27 np0005537642 podman[74637]: 2025-11-27 10:53:27.159516443 +0000 UTC m=+0.184322828 container init 2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d (image=quay.io/ceph/ceph:v19, name=vigilant_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:27 np0005537642 podman[74637]: 2025-11-27 10:53:27.171143611 +0000 UTC m=+0.195949996 container start 2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d (image=quay.io/ceph/ceph:v19, name=vigilant_shockley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:53:27 np0005537642 podman[74637]: 2025-11-27 10:53:27.174857139 +0000 UTC m=+0.199663524 container attach 2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d (image=quay.io/ceph/ceph:v19, name=vigilant_shockley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:27 np0005537642 ceph-mgr[74636]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:53:27 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'cephadm'
Nov 27 05:53:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:27.223+0000 7f3cabe09140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:53:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 27 05:53:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/310389238' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]: 
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]: {
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "health": {
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "status": "HEALTH_OK",
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "checks": {},
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "mutes": []
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    },
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "election_epoch": 5,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "quorum": [
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        0
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    ],
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "quorum_names": [
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "compute-0"
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    ],
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "quorum_age": 2,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "monmap": {
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "epoch": 1,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "min_mon_release_name": "squid",
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_mons": 1
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    },
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "osdmap": {
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "epoch": 1,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_osds": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_up_osds": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "osd_up_since": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_in_osds": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "osd_in_since": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_remapped_pgs": 0
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    },
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "pgmap": {
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "pgs_by_state": [],
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_pgs": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_pools": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_objects": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "data_bytes": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "bytes_used": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "bytes_avail": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "bytes_total": 0
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    },
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "fsmap": {
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "epoch": 1,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "btime": "2025-11-27T10:53:22:366160+0000",
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "by_rank": [],
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "up:standby": 0
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    },
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "mgrmap": {
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "available": false,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "num_standbys": 0,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "modules": [
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:            "iostat",
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:            "nfs",
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:            "restful"
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        ],
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "services": {}
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    },
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "servicemap": {
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "epoch": 1,
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "modified": "2025-11-27T10:53:22.368539+0000",
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:        "services": {}
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    },
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]:    "progress_events": {}
Nov 27 05:53:27 np0005537642 vigilant_shockley[74674]: }
Nov 27 05:53:27 np0005537642 systemd[1]: libpod-2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d.scope: Deactivated successfully.
Nov 27 05:53:27 np0005537642 podman[74637]: 2025-11-27 10:53:27.393056972 +0000 UTC m=+0.417863327 container died 2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d (image=quay.io/ceph/ceph:v19, name=vigilant_shockley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 27 05:53:27 np0005537642 systemd[1]: var-lib-containers-storage-overlay-4a54d8846a576aabc7320fa62fc06cf9d50b2e2c1d79d2fe61e05d305cb6b121-merged.mount: Deactivated successfully.
Nov 27 05:53:27 np0005537642 podman[74637]: 2025-11-27 10:53:27.447767398 +0000 UTC m=+0.472573753 container remove 2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d (image=quay.io/ceph/ceph:v19, name=vigilant_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 27 05:53:27 np0005537642 systemd[1]: libpod-conmon-2620c2b6d02c44883644de0bd00078c9a28d510f7b150403f9cc0f2a0db2947d.scope: Deactivated successfully.
Nov 27 05:53:27 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'crash'
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:28.044+0000 7f3cabe09140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'dashboard'
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'devicehealth'
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:28.618+0000 7f3cabe09140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'diskprediction_local'
Nov 27 05:53:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 27 05:53:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 27 05:53:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  from numpy import show_config as show_numpy_config
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:28.775+0000 7f3cabe09140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'influx'
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:28.840+0000 7f3cabe09140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'insights'
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'iostat'
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:28.962+0000 7f3cabe09140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:53:28 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'k8sevents'
Nov 27 05:53:29 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'localpool'
Nov 27 05:53:29 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mds_autoscaler'
Nov 27 05:53:29 np0005537642 podman[74722]: 2025-11-27 10:53:29.598929155 +0000 UTC m=+0.110741986 container create 68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5 (image=quay.io/ceph/ceph:v19, name=nice_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:29 np0005537642 podman[74722]: 2025-11-27 10:53:29.516448706 +0000 UTC m=+0.028261567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:29 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mirroring'
Nov 27 05:53:29 np0005537642 systemd[1]: Started libpod-conmon-68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5.scope.
Nov 27 05:53:29 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:29 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd8aecc3120db2bb999184289cefff703d704d6f8e19f9b349ece5a8a4263e4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:29 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd8aecc3120db2bb999184289cefff703d704d6f8e19f9b349ece5a8a4263e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:29 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bd8aecc3120db2bb999184289cefff703d704d6f8e19f9b349ece5a8a4263e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:29 np0005537642 podman[74722]: 2025-11-27 10:53:29.699686811 +0000 UTC m=+0.211499732 container init 68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5 (image=quay.io/ceph/ceph:v19, name=nice_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 27 05:53:29 np0005537642 podman[74722]: 2025-11-27 10:53:29.706396223 +0000 UTC m=+0.218209094 container start 68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5 (image=quay.io/ceph/ceph:v19, name=nice_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:29 np0005537642 podman[74722]: 2025-11-27 10:53:29.709881421 +0000 UTC m=+0.221694272 container attach 68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5 (image=quay.io/ceph/ceph:v19, name=nice_bhaskara, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 27 05:53:29 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'nfs'
Nov 27 05:53:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 27 05:53:29 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3150117275' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]: 
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]: {
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "health": {
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "status": "HEALTH_OK",
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "checks": {},
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "mutes": []
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    },
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "election_epoch": 5,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "quorum": [
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        0
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    ],
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "quorum_names": [
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "compute-0"
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    ],
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "quorum_age": 4,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "monmap": {
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "epoch": 1,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "min_mon_release_name": "squid",
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_mons": 1
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    },
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "osdmap": {
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "epoch": 1,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_osds": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_up_osds": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "osd_up_since": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_in_osds": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "osd_in_since": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_remapped_pgs": 0
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    },
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "pgmap": {
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "pgs_by_state": [],
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_pgs": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_pools": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_objects": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "data_bytes": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "bytes_used": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "bytes_avail": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "bytes_total": 0
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    },
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "fsmap": {
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "epoch": 1,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "btime": "2025-11-27T10:53:22:366160+0000",
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "by_rank": [],
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "up:standby": 0
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    },
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "mgrmap": {
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "available": false,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "num_standbys": 0,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "modules": [
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:            "iostat",
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:            "nfs",
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:            "restful"
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        ],
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "services": {}
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    },
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "servicemap": {
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "epoch": 1,
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "modified": "2025-11-27T10:53:22.368539+0000",
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:        "services": {}
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    },
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]:    "progress_events": {}
Nov 27 05:53:29 np0005537642 nice_bhaskara[74739]: }
Nov 27 05:53:29 np0005537642 systemd[1]: libpod-68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5.scope: Deactivated successfully.
Nov 27 05:53:29 np0005537642 podman[74722]: 2025-11-27 10:53:29.929730143 +0000 UTC m=+0.441542974 container died 68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5 (image=quay.io/ceph/ceph:v19, name=nice_bhaskara, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 27 05:53:29 np0005537642 systemd[1]: var-lib-containers-storage-overlay-1bd8aecc3120db2bb999184289cefff703d704d6f8e19f9b349ece5a8a4263e4-merged.mount: Deactivated successfully.
Nov 27 05:53:29 np0005537642 podman[74722]: 2025-11-27 10:53:29.967694315 +0000 UTC m=+0.479507146 container remove 68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5 (image=quay.io/ceph/ceph:v19, name=nice_bhaskara, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:53:29 np0005537642 systemd[1]: libpod-conmon-68b3421abb88160a2f39bfa64702719b7eba20464d80c500bad276d90f6017a5.scope: Deactivated successfully.
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:30.018+0000 7f3cabe09140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'orchestrator'
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:30.220+0000 7f3cabe09140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_perf_query'
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:30.295+0000 7f3cabe09140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_support'
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:30.360+0000 7f3cabe09140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'pg_autoscaler'
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:30.440+0000 7f3cabe09140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'progress'
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:30.509+0000 7f3cabe09140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'prometheus'
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:30.858+0000 7f3cabe09140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rbd_support'
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:30.959+0000 7f3cabe09140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:53:30 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'restful'
Nov 27 05:53:31 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rgw'
Nov 27 05:53:31 np0005537642 ceph-mgr[74636]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:53:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:31.373+0000 7f3cabe09140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:53:31 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rook'
Nov 27 05:53:31 np0005537642 ceph-mgr[74636]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:53:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:31.879+0000 7f3cabe09140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:53:31 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'selftest'
Nov 27 05:53:31 np0005537642 ceph-mgr[74636]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:53:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:31.946+0000 7f3cabe09140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:53:31 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'snap_schedule'
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:32.030+0000 7f3cabe09140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'stats'
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'status'
Nov 27 05:53:32 np0005537642 podman[74779]: 2025-11-27 10:53:32.020755624 +0000 UTC m=+0.028447577 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:32 np0005537642 podman[74779]: 2025-11-27 10:53:32.154471561 +0000 UTC m=+0.162163494 container create 6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c (image=quay.io/ceph/ceph:v19, name=tender_mendeleev, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:32.173+0000 7f3cabe09140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telegraf'
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:32.252+0000 7f3cabe09140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telemetry'
Nov 27 05:53:32 np0005537642 systemd[1]: Started libpod-conmon-6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c.scope.
Nov 27 05:53:32 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:32 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c30a9c8aa13db4278fc383b3ee4ea3aafb99693da05431acea235a52ee3e6bb9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:32 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c30a9c8aa13db4278fc383b3ee4ea3aafb99693da05431acea235a52ee3e6bb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:32 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c30a9c8aa13db4278fc383b3ee4ea3aafb99693da05431acea235a52ee3e6bb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:32 np0005537642 podman[74779]: 2025-11-27 10:53:32.320873828 +0000 UTC m=+0.328565771 container init 6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c (image=quay.io/ceph/ceph:v19, name=tender_mendeleev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 27 05:53:32 np0005537642 podman[74779]: 2025-11-27 10:53:32.33111652 +0000 UTC m=+0.338808483 container start 6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c (image=quay.io/ceph/ceph:v19, name=tender_mendeleev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:32.412+0000 7f3cabe09140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'test_orchestrator'
Nov 27 05:53:32 np0005537642 podman[74779]: 2025-11-27 10:53:32.53884256 +0000 UTC m=+0.546534583 container attach 6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c (image=quay.io/ceph/ceph:v19, name=tender_mendeleev, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124830335' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]: 
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]: {
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "health": {
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "status": "HEALTH_OK",
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "checks": {},
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "mutes": []
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    },
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "election_epoch": 5,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "quorum": [
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        0
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    ],
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "quorum_names": [
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "compute-0"
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    ],
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "quorum_age": 7,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "monmap": {
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "epoch": 1,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "min_mon_release_name": "squid",
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_mons": 1
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    },
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "osdmap": {
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "epoch": 1,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_osds": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_up_osds": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "osd_up_since": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_in_osds": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "osd_in_since": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_remapped_pgs": 0
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    },
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "pgmap": {
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "pgs_by_state": [],
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_pgs": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_pools": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_objects": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "data_bytes": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "bytes_used": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "bytes_avail": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "bytes_total": 0
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    },
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "fsmap": {
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "epoch": 1,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "btime": "2025-11-27T10:53:22:366160+0000",
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "by_rank": [],
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "up:standby": 0
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    },
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "mgrmap": {
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "available": false,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "num_standbys": 0,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "modules": [
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:            "iostat",
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:            "nfs",
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:            "restful"
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        ],
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "services": {}
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    },
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "servicemap": {
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "epoch": 1,
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "modified": "2025-11-27T10:53:22.368539+0000",
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:        "services": {}
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    },
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]:    "progress_events": {}
Nov 27 05:53:32 np0005537642 tender_mendeleev[74795]: }
Nov 27 05:53:32 np0005537642 systemd[1]: libpod-6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c.scope: Deactivated successfully.
Nov 27 05:53:32 np0005537642 podman[74779]: 2025-11-27 10:53:32.570484789 +0000 UTC m=+0.578176712 container died 6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c (image=quay.io/ceph/ceph:v19, name=tender_mendeleev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 27 05:53:32 np0005537642 systemd[1]: var-lib-containers-storage-overlay-c30a9c8aa13db4278fc383b3ee4ea3aafb99693da05431acea235a52ee3e6bb9-merged.mount: Deactivated successfully.
Nov 27 05:53:32 np0005537642 podman[74779]: 2025-11-27 10:53:32.61594042 +0000 UTC m=+0.623632343 container remove 6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c (image=quay.io/ceph/ceph:v19, name=tender_mendeleev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:32 np0005537642 systemd[1]: libpod-conmon-6d9cc0fb30a56bf1814dc3e3bd3637bef107b60e7063eb3ffe45ea783ea40d7c.scope: Deactivated successfully.
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'volumes'
Nov 27 05:53:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:32.632+0000 7f3cabe09140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'zabbix'
Nov 27 05:53:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:32.896+0000 7f3cabe09140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:32.965+0000 7f3cabe09140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: ms_deliver_dispatch: unhandled message 0x55ea0b2149c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qnrkij
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map Activating!
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.qnrkij(active, starting, since 0.0114442s)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map I am now activating
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e1 all = 1
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"} v 0)
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"}]: dispatch
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: balancer
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: crash
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: devicehealth
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: iostat
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Starting
Nov 27 05:53:32 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Manager daemon compute-0.qnrkij is now available
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: nfs
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [balancer INFO root] Starting
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: orchestrator
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_10:53:32
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [balancer INFO root] No pools available
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:32 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: pg_autoscaler
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: progress
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [progress INFO root] Loading...
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [progress INFO root] No stored events to load
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded [] historic events
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded OSDMap, ready.
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] recovery thread starting
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] starting setup
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: rbd_support
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: restful
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: status
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [restful INFO root] server_addr: :: server_port: 8003
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [restful WARNING root] server not running: no certificate configured
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: telemetry
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"} v 0)
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] PerfHandler: starting
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TaskHandler: starting
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"} v 0)
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] setup complete
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Nov 27 05:53:33 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: volumes
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: Activating manager daemon compute-0.qnrkij
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: Manager daemon compute-0.qnrkij is now available
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: from='mgr.14102 192.168.122.100:0/125125434' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:33 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.qnrkij(active, since 1.02395s)
Nov 27 05:53:34 np0005537642 podman[74913]: 2025-11-27 10:53:34.703349568 +0000 UTC m=+0.056361566 container create 4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8 (image=quay.io/ceph/ceph:v19, name=fervent_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:34 np0005537642 systemd[1]: Started libpod-conmon-4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8.scope.
Nov 27 05:53:34 np0005537642 podman[74913]: 2025-11-27 10:53:34.675728252 +0000 UTC m=+0.028740321 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:34 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:34 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b812f1cd5621617e3d88cd3bba7ae0f8cc156d903b77451741e7710dd2f4cb03/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:34 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b812f1cd5621617e3d88cd3bba7ae0f8cc156d903b77451741e7710dd2f4cb03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:34 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b812f1cd5621617e3d88cd3bba7ae0f8cc156d903b77451741e7710dd2f4cb03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:34 np0005537642 podman[74913]: 2025-11-27 10:53:34.794780087 +0000 UTC m=+0.147792125 container init 4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8 (image=quay.io/ceph/ceph:v19, name=fervent_wescoff, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 27 05:53:34 np0005537642 podman[74913]: 2025-11-27 10:53:34.800621267 +0000 UTC m=+0.153633265 container start 4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8 (image=quay.io/ceph/ceph:v19, name=fervent_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:53:34 np0005537642 podman[74913]: 2025-11-27 10:53:34.804295773 +0000 UTC m=+0.157307771 container attach 4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8 (image=quay.io/ceph/ceph:v19, name=fervent_wescoff, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:53:34 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:53:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.qnrkij(active, since 2s)
Nov 27 05:53:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Nov 27 05:53:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510035926' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]: 
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]: {
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "health": {
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "status": "HEALTH_OK",
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "checks": {},
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "mutes": []
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    },
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "election_epoch": 5,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "quorum": [
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        0
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    ],
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "quorum_names": [
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "compute-0"
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    ],
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "quorum_age": 10,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "monmap": {
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "epoch": 1,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "min_mon_release_name": "squid",
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_mons": 1
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    },
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "osdmap": {
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "epoch": 1,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_osds": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_up_osds": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "osd_up_since": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_in_osds": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "osd_in_since": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_remapped_pgs": 0
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    },
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "pgmap": {
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "pgs_by_state": [],
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_pgs": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_pools": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_objects": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "data_bytes": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "bytes_used": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "bytes_avail": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "bytes_total": 0
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    },
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "fsmap": {
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "epoch": 1,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "btime": "2025-11-27T10:53:22:366160+0000",
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "by_rank": [],
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "up:standby": 0
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    },
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "mgrmap": {
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "available": true,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "num_standbys": 0,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "modules": [
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:            "iostat",
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:            "nfs",
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:            "restful"
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        ],
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "services": {}
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    },
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "servicemap": {
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "epoch": 1,
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "modified": "2025-11-27T10:53:22.368539+0000",
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:        "services": {}
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    },
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]:    "progress_events": {}
Nov 27 05:53:35 np0005537642 fervent_wescoff[74929]: }
Nov 27 05:53:35 np0005537642 systemd[1]: libpod-4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8.scope: Deactivated successfully.
Nov 27 05:53:35 np0005537642 podman[74913]: 2025-11-27 10:53:35.232459784 +0000 UTC m=+0.585471802 container died 4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8 (image=quay.io/ceph/ceph:v19, name=fervent_wescoff, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:35 np0005537642 systemd[1]: var-lib-containers-storage-overlay-b812f1cd5621617e3d88cd3bba7ae0f8cc156d903b77451741e7710dd2f4cb03-merged.mount: Deactivated successfully.
Nov 27 05:53:35 np0005537642 podman[74913]: 2025-11-27 10:53:35.273894202 +0000 UTC m=+0.626906190 container remove 4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8 (image=quay.io/ceph/ceph:v19, name=fervent_wescoff, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:35 np0005537642 systemd[1]: libpod-conmon-4816387a3faf6873e0f874176cf1b3f0f6219b4bab99bcfd5ef3362f332bf6d8.scope: Deactivated successfully.
Nov 27 05:53:35 np0005537642 podman[74966]: 2025-11-27 10:53:35.348824089 +0000 UTC m=+0.050070074 container create 0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440 (image=quay.io/ceph/ceph:v19, name=objective_torvalds, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 27 05:53:35 np0005537642 systemd[1]: Started libpod-conmon-0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440.scope.
Nov 27 05:53:35 np0005537642 podman[74966]: 2025-11-27 10:53:35.324742243 +0000 UTC m=+0.025988278 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:35 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd644323e85fb2cc11d92fff4bc356ae1ab46e359dae7e09c21ffc3f8c445c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd644323e85fb2cc11d92fff4bc356ae1ab46e359dae7e09c21ffc3f8c445c1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd644323e85fb2cc11d92fff4bc356ae1ab46e359dae7e09c21ffc3f8c445c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fd644323e85fb2cc11d92fff4bc356ae1ab46e359dae7e09c21ffc3f8c445c1/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:35 np0005537642 podman[74966]: 2025-11-27 10:53:35.449929081 +0000 UTC m=+0.151175056 container init 0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440 (image=quay.io/ceph/ceph:v19, name=objective_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:53:35 np0005537642 podman[74966]: 2025-11-27 10:53:35.460246917 +0000 UTC m=+0.161492892 container start 0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440 (image=quay.io/ceph/ceph:v19, name=objective_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 27 05:53:35 np0005537642 podman[74966]: 2025-11-27 10:53:35.464101972 +0000 UTC m=+0.165347997 container attach 0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440 (image=quay.io/ceph/ceph:v19, name=objective_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 27 05:53:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 27 05:53:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2332806498' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 27 05:53:35 np0005537642 objective_torvalds[74982]: 
Nov 27 05:53:35 np0005537642 objective_torvalds[74982]: [global]
Nov 27 05:53:35 np0005537642 objective_torvalds[74982]: 	fsid = 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:53:35 np0005537642 objective_torvalds[74982]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 27 05:53:35 np0005537642 systemd[1]: libpod-0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440.scope: Deactivated successfully.
Nov 27 05:53:35 np0005537642 podman[74966]: 2025-11-27 10:53:35.833549094 +0000 UTC m=+0.534795039 container died 0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440 (image=quay.io/ceph/ceph:v19, name=objective_torvalds, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:35 np0005537642 systemd[1]: var-lib-containers-storage-overlay-2fd644323e85fb2cc11d92fff4bc356ae1ab46e359dae7e09c21ffc3f8c445c1-merged.mount: Deactivated successfully.
Nov 27 05:53:35 np0005537642 podman[74966]: 2025-11-27 10:53:35.867728724 +0000 UTC m=+0.568974699 container remove 0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440 (image=quay.io/ceph/ceph:v19, name=objective_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:35 np0005537642 systemd[1]: libpod-conmon-0ddf4a5c5d5c98edb3cdba792e39aef149a353f43a14b55957725485d6ef1440.scope: Deactivated successfully.
Nov 27 05:53:35 np0005537642 podman[75021]: 2025-11-27 10:53:35.935889746 +0000 UTC m=+0.044156230 container create 5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb (image=quay.io/ceph/ceph:v19, name=funny_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 27 05:53:35 np0005537642 systemd[1]: Started libpod-conmon-5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb.scope.
Nov 27 05:53:35 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e9b027add4012a68d6468387743c4a1ffd4a162d02d7d1e63e1037f5cfa390/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e9b027add4012a68d6468387743c4a1ffd4a162d02d7d1e63e1037f5cfa390/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01e9b027add4012a68d6468387743c4a1ffd4a162d02d7d1e63e1037f5cfa390/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:36 np0005537642 podman[75021]: 2025-11-27 10:53:36.004245497 +0000 UTC m=+0.112512091 container init 5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb (image=quay.io/ceph/ceph:v19, name=funny_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:36 np0005537642 podman[75021]: 2025-11-27 10:53:35.917312024 +0000 UTC m=+0.025578538 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:36 np0005537642 podman[75021]: 2025-11-27 10:53:36.015752029 +0000 UTC m=+0.124018553 container start 5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb (image=quay.io/ceph/ceph:v19, name=funny_satoshi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 27 05:53:36 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2332806498' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 27 05:53:36 np0005537642 podman[75021]: 2025-11-27 10:53:36.019642496 +0000 UTC m=+0.127908990 container attach 5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb (image=quay.io/ceph/ceph:v19, name=funny_satoshi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 27 05:53:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Nov 27 05:53:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105185965' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 27 05:53:36 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:53:37 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1105185965' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 27 05:53:37 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1105185965' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  1: '-n'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  2: 'mgr.compute-0.qnrkij'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  3: '-f'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  4: '--setuser'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  5: 'ceph'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  6: '--setgroup'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  7: 'ceph'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  8: '--default-log-to-file=false'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  9: '--default-log-to-journald=true'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr respawn  exe_path /proc/self/exe
Nov 27 05:53:37 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.qnrkij(active, since 4s)
Nov 27 05:53:37 np0005537642 systemd[1]: libpod-5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb.scope: Deactivated successfully.
Nov 27 05:53:37 np0005537642 podman[75063]: 2025-11-27 10:53:37.124646402 +0000 UTC m=+0.041622278 container died 5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb (image=quay.io/ceph/ceph:v19, name=funny_satoshi, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:37 np0005537642 systemd[1]: var-lib-containers-storage-overlay-01e9b027add4012a68d6468387743c4a1ffd4a162d02d7d1e63e1037f5cfa390-merged.mount: Deactivated successfully.
Nov 27 05:53:37 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setuser ceph since I am not root
Nov 27 05:53:37 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setgroup ceph since I am not root
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: pidfile_write: ignore empty --pid-file
Nov 27 05:53:37 np0005537642 podman[75063]: 2025-11-27 10:53:37.169110786 +0000 UTC m=+0.086086672 container remove 5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb (image=quay.io/ceph/ceph:v19, name=funny_satoshi, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 27 05:53:37 np0005537642 systemd[1]: libpod-conmon-5355c0845d028a3eac13e38c0d9c4146d6a7e8610cf21a24b96c0b79fb5f29cb.scope: Deactivated successfully.
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'alerts'
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:53:37 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:37.301+0000 7f7b628b6140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'balancer'
Nov 27 05:53:37 np0005537642 podman[75098]: 2025-11-27 10:53:37.2166858 +0000 UTC m=+0.022404246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:53:37 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:37.377+0000 7f7b628b6140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:53:37 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'cephadm'
Nov 27 05:53:37 np0005537642 podman[75098]: 2025-11-27 10:53:37.411876678 +0000 UTC m=+0.217595094 container create f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a (image=quay.io/ceph/ceph:v19, name=elated_lewin, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 05:53:37 np0005537642 systemd[1]: Started libpod-conmon-f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a.scope.
Nov 27 05:53:37 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:37 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5c8b6a1024c62dac0e60d3f2ff4d88405fbb88980a991bc81bc8fa3a44aaf1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:37 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5c8b6a1024c62dac0e60d3f2ff4d88405fbb88980a991bc81bc8fa3a44aaf1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:37 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5c8b6a1024c62dac0e60d3f2ff4d88405fbb88980a991bc81bc8fa3a44aaf1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:37 np0005537642 podman[75098]: 2025-11-27 10:53:37.765800725 +0000 UTC m=+0.571519202 container init f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a (image=quay.io/ceph/ceph:v19, name=elated_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:37 np0005537642 podman[75098]: 2025-11-27 10:53:37.776192924 +0000 UTC m=+0.581911340 container start f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a (image=quay.io/ceph/ceph:v19, name=elated_lewin, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 27 05:53:37 np0005537642 podman[75098]: 2025-11-27 10:53:37.945362324 +0000 UTC m=+0.751080800 container attach f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a (image=quay.io/ceph/ceph:v19, name=elated_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'crash'
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'dashboard'
Nov 27 05:53:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:38.146+0000 7f7b628b6140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:53:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Nov 27 05:53:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4019267255' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 27 05:53:38 np0005537642 elated_lewin[75114]: {
Nov 27 05:53:38 np0005537642 elated_lewin[75114]:    "epoch": 5,
Nov 27 05:53:38 np0005537642 elated_lewin[75114]:    "available": true,
Nov 27 05:53:38 np0005537642 elated_lewin[75114]:    "active_name": "compute-0.qnrkij",
Nov 27 05:53:38 np0005537642 elated_lewin[75114]:    "num_standby": 0
Nov 27 05:53:38 np0005537642 elated_lewin[75114]: }
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'devicehealth'
Nov 27 05:53:38 np0005537642 systemd[1]: libpod-f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a.scope: Deactivated successfully.
Nov 27 05:53:38 np0005537642 podman[75098]: 2025-11-27 10:53:38.701165859 +0000 UTC m=+1.506884245 container died f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a (image=quay.io/ceph/ceph:v19, name=elated_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 27 05:53:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:38.754+0000 7f7b628b6140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'diskprediction_local'
Nov 27 05:53:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 27 05:53:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 27 05:53:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  from numpy import show_config as show_numpy_config
Nov 27 05:53:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:38.908+0000 7f7b628b6140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'influx'
Nov 27 05:53:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:38.977+0000 7f7b628b6140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:53:38 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'insights'
Nov 27 05:53:39 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'iostat'
Nov 27 05:53:39 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:39.108+0000 7f7b628b6140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:53:39 np0005537642 ceph-mgr[74636]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:53:39 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'k8sevents'
Nov 27 05:53:39 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1105185965' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 27 05:53:39 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'localpool'
Nov 27 05:53:39 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mds_autoscaler'
Nov 27 05:53:39 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mirroring'
Nov 27 05:53:39 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'nfs'
Nov 27 05:53:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:40.093+0000 7f7b628b6140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'orchestrator'
Nov 27 05:53:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:40.306+0000 7f7b628b6140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_perf_query'
Nov 27 05:53:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:40.383+0000 7f7b628b6140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_support'
Nov 27 05:53:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:40.451+0000 7f7b628b6140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'pg_autoscaler'
Nov 27 05:53:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:40.527+0000 7f7b628b6140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'progress'
Nov 27 05:53:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:40.593+0000 7f7b628b6140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'prometheus'
Nov 27 05:53:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:40.935+0000 7f7b628b6140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:53:40 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rbd_support'
Nov 27 05:53:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:41.027+0000 7f7b628b6140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:53:41 np0005537642 ceph-mgr[74636]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:53:41 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'restful'
Nov 27 05:53:41 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rgw'
Nov 27 05:53:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:41.431+0000 7f7b628b6140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:53:41 np0005537642 ceph-mgr[74636]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:53:41 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rook'
Nov 27 05:53:41 np0005537642 systemd[1]: var-lib-containers-storage-overlay-ea5c8b6a1024c62dac0e60d3f2ff4d88405fbb88980a991bc81bc8fa3a44aaf1-merged.mount: Deactivated successfully.
Nov 27 05:53:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:41.941+0000 7f7b628b6140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:53:41 np0005537642 ceph-mgr[74636]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:53:41 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'selftest'
Nov 27 05:53:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:42.007+0000 7f7b628b6140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'snap_schedule'
Nov 27 05:53:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:42.079+0000 7f7b628b6140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'stats'
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'status'
Nov 27 05:53:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:42.221+0000 7f7b628b6140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telegraf'
Nov 27 05:53:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:42.290+0000 7f7b628b6140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telemetry'
Nov 27 05:53:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:42.445+0000 7f7b628b6140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'test_orchestrator'
Nov 27 05:53:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:42.668+0000 7f7b628b6140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'volumes'
Nov 27 05:53:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:42.941+0000 7f7b628b6140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:53:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'zabbix'
Nov 27 05:53:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:53:43.015+0000 7f7b628b6140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:53:43 np0005537642 ceph-mgr[74636]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:53:43 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qnrkij restarted
Nov 27 05:53:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 27 05:53:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:53:43 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qnrkij
Nov 27 05:53:43 np0005537642 ceph-mgr[74636]: ms_deliver_dispatch: unhandled message 0x557705b96d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map Activating!
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.qnrkij(active, starting, since 3s)
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map I am now activating
Nov 27 05:53:46 np0005537642 podman[75098]: 2025-11-27 10:53:46.775694895 +0000 UTC m=+9.581413321 container remove f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a (image=quay.io/ceph/ceph:v19, name=elated_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"} v 0)
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"}]: dispatch
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e1 all = 1
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: balancer
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Starting
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Manager daemon compute-0.qnrkij is now available
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_10:53:46
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] No pools available
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 27 05:53:46 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: Active manager daemon compute-0.qnrkij restarted
Nov 27 05:53:46 np0005537642 ceph-mon[74338]: Activating manager daemon compute-0.qnrkij
Nov 27 05:53:46 np0005537642 podman[75168]: 2025-11-27 10:53:46.836035131 +0000 UTC m=+0.026807138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.qnrkij(active, since 5s)
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: Manager daemon compute-0.qnrkij is now available
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: Found migration_current of "None". Setting to last migration.
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: cephadm
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: crash
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: devicehealth
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Starting
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: iostat
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: nfs
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: orchestrator
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: pg_autoscaler
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: progress
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [progress INFO root] Loading...
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [progress INFO root] No stored events to load
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded [] historic events
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded OSDMap, ready.
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] recovery thread starting
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] starting setup
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: rbd_support
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: restful
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [restful INFO root] server_addr: :: server_port: 8003
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"} v 0)
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: status
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: telemetry
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [restful WARNING root] server not running: no certificate configured
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] PerfHandler: starting
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TaskHandler: starting
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"} v 0)
Nov 27 05:53:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] setup complete
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: volumes
Nov 27 05:53:48 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:53:48 np0005537642 podman[75168]: 2025-11-27 10:53:48.870860635 +0000 UTC m=+2.061632622 container create 63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7 (image=quay.io/ceph/ceph:v19, name=gracious_vaughan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 27 05:53:48 np0005537642 systemd[1]: libpod-conmon-f7bf24b07f016337e97362abbb10ffe6d65d9169fac15b39810a37ef68b6551a.scope: Deactivated successfully.
Nov 27 05:53:48 np0005537642 systemd[1]: Started libpod-conmon-63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7.scope.
Nov 27 05:53:48 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:48 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e299e394120a1c832e0606b98d2bd3623e0801c3073380438641e5ad214c25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:48 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e299e394120a1c832e0606b98d2bd3623e0801c3073380438641e5ad214c25/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:48 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9e299e394120a1c832e0606b98d2bd3623e0801c3073380438641e5ad214c25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:48 np0005537642 podman[75168]: 2025-11-27 10:53:48.979443896 +0000 UTC m=+2.170215913 container init 63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7 (image=quay.io/ceph/ceph:v19, name=gracious_vaughan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:48 np0005537642 podman[75168]: 2025-11-27 10:53:48.989142471 +0000 UTC m=+2.179914438 container start 63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7 (image=quay.io/ceph/ceph:v19, name=gracious_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:48 np0005537642 podman[75168]: 2025-11-27 10:53:48.992953574 +0000 UTC m=+2.183725581 container attach 63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7 (image=quay.io/ceph/ceph:v19, name=gracious_vaughan, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14132 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 27 05:53:49 np0005537642 gracious_vaughan[75297]: {
Nov 27 05:53:49 np0005537642 gracious_vaughan[75297]:    "mgrmap_epoch": 7,
Nov 27 05:53:49 np0005537642 gracious_vaughan[75297]:    "initialized": true
Nov 27 05:53:49 np0005537642 gracious_vaughan[75297]: }
Nov 27 05:53:49 np0005537642 systemd[1]: libpod-63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7.scope: Deactivated successfully.
Nov 27 05:53:49 np0005537642 podman[75168]: 2025-11-27 10:53:49.162931933 +0000 UTC m=+2.353703920 container died 63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7 (image=quay.io/ceph/ceph:v19, name=gracious_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:49 np0005537642 systemd[1]: var-lib-containers-storage-overlay-d9e299e394120a1c832e0606b98d2bd3623e0801c3073380438641e5ad214c25-merged.mount: Deactivated successfully.
Nov 27 05:53:49 np0005537642 podman[75168]: 2025-11-27 10:53:49.209877286 +0000 UTC m=+2.400649243 container remove 63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7 (image=quay.io/ceph/ceph:v19, name=gracious_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 27 05:53:49 np0005537642 systemd[1]: libpod-conmon-63d8d1f2aa3922c0921c5e839d180557d0bda4ab171a9abef0f538600cfebbc7.scope: Deactivated successfully.
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.qnrkij(active, since 6s)
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:49 np0005537642 podman[75335]: 2025-11-27 10:53:49.313555732 +0000 UTC m=+0.072400596 container create 9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663 (image=quay.io/ceph/ceph:v19, name=quizzical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:49 np0005537642 systemd[1]: Started libpod-conmon-9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663.scope.
Nov 27 05:53:49 np0005537642 podman[75335]: 2025-11-27 10:53:49.278446867 +0000 UTC m=+0.037291781 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:49 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:49 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687cc6b29b2b653854227138fbcbe6640a6b9159bcc073043c72809e7892bdd1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:49 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687cc6b29b2b653854227138fbcbe6640a6b9159bcc073043c72809e7892bdd1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:49 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/687cc6b29b2b653854227138fbcbe6640a6b9159bcc073043c72809e7892bdd1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:49 np0005537642 podman[75335]: 2025-11-27 10:53:49.414827513 +0000 UTC m=+0.173672367 container init 9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663 (image=quay.io/ceph/ceph:v19, name=quizzical_rosalind, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:49 np0005537642 podman[75335]: 2025-11-27 10:53:49.424113868 +0000 UTC m=+0.182958703 container start 9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663 (image=quay.io/ceph/ceph:v19, name=quizzical_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:49 np0005537642 podman[75335]: 2025-11-27 10:53:49.43288967 +0000 UTC m=+0.191734504 container attach 9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663 (image=quay.io/ceph/ceph:v19, name=quizzical_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:53:49] ENGINE Bus STARTING
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:53:49] ENGINE Bus STARTING
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 27 05:53:49 np0005537642 systemd[1]: libpod-9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663.scope: Deactivated successfully.
Nov 27 05:53:49 np0005537642 podman[75335]: 2025-11-27 10:53:49.855334806 +0000 UTC m=+0.614179640 container died 9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663 (image=quay.io/ceph/ceph:v19, name=quizzical_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:53:49 np0005537642 systemd[1]: var-lib-containers-storage-overlay-687cc6b29b2b653854227138fbcbe6640a6b9159bcc073043c72809e7892bdd1-merged.mount: Deactivated successfully.
Nov 27 05:53:49 np0005537642 podman[75335]: 2025-11-27 10:53:49.893332549 +0000 UTC m=+0.652177373 container remove 9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663 (image=quay.io/ceph/ceph:v19, name=quizzical_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Nov 27 05:53:49 np0005537642 systemd[1]: libpod-conmon-9bd87faa9b0830683b5181d0141421fa7b156f4b99c02a5e8c95582556b26663.scope: Deactivated successfully.
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:53:49] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:53:49] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:53:49] ENGINE Client ('192.168.122.100', 55444) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:53:49 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:53:49] ENGINE Client ('192.168.122.100', 55444) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:53:49 np0005537642 podman[75401]: 2025-11-27 10:53:49.96168347 +0000 UTC m=+0.045837301 container create cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8 (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 27 05:53:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019919369 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:53:50 np0005537642 systemd[1]: Started libpod-conmon-cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8.scope.
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:53:50] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:53:50] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:53:50] ENGINE Bus STARTED
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:53:50] ENGINE Bus STARTED
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 27 05:53:50 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:50 np0005537642 podman[75401]: 2025-11-27 10:53:49.938281957 +0000 UTC m=+0.022435818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd509d9b94f2fda8906c2278e0b16f41a5347b938ed3c7b6ba21f643da66b8e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd509d9b94f2fda8906c2278e0b16f41a5347b938ed3c7b6ba21f643da66b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fd509d9b94f2fda8906c2278e0b16f41a5347b938ed3c7b6ba21f643da66b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:50 np0005537642 podman[75401]: 2025-11-27 10:53:50.110977465 +0000 UTC m=+0.195131376 container init cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8 (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Nov 27 05:53:50 np0005537642 podman[75401]: 2025-11-27 10:53:50.115772485 +0000 UTC m=+0.199926306 container start cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8 (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 27 05:53:50 np0005537642 podman[75401]: 2025-11-27 10:53:50.175773065 +0000 UTC m=+0.259926897 container attach cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8 (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:53:49] ENGINE Bus STARTING
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:53:49] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:53:49] ENGINE Client ('192.168.122.100', 55444) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:53:50] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:53:50] ENGINE Bus STARTED
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Set ssh ssh_user
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Nov 27 05:53:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Set ssh ssh_config
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 27 05:53:50 np0005537642 inspiring_roentgen[75428]: ssh user set to ceph-admin. sudo will be used
Nov 27 05:53:50 np0005537642 systemd[1]: libpod-cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8.scope: Deactivated successfully.
Nov 27 05:53:50 np0005537642 podman[75401]: 2025-11-27 10:53:50.518461043 +0000 UTC m=+0.602614904 container died cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8 (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 27 05:53:50 np0005537642 systemd[1]: var-lib-containers-storage-overlay-9fd509d9b94f2fda8906c2278e0b16f41a5347b938ed3c7b6ba21f643da66b8e-merged.mount: Deactivated successfully.
Nov 27 05:53:50 np0005537642 podman[75401]: 2025-11-27 10:53:50.606559542 +0000 UTC m=+0.690713393 container remove cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8 (image=quay.io/ceph/ceph:v19, name=inspiring_roentgen, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:50 np0005537642 systemd[1]: libpod-conmon-cfb19f374c8624913c1c6cbbf1e6aceaddd0f9d5ad888fac42d11c9c445a01e8.scope: Deactivated successfully.
Nov 27 05:53:50 np0005537642 podman[75468]: 2025-11-27 10:53:50.733560357 +0000 UTC m=+0.090195140 container create 3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f (image=quay.io/ceph/ceph:v19, name=beautiful_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Nov 27 05:53:50 np0005537642 systemd[1]: Started libpod-conmon-3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f.scope.
Nov 27 05:53:50 np0005537642 podman[75468]: 2025-11-27 10:53:50.689080492 +0000 UTC m=+0.045715325 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:50 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:53:50 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e384fdc3b4e8d0cb0373f55ed6a9b82e1454b3ff11dca6b32f7cfd9501478f22/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e384fdc3b4e8d0cb0373f55ed6a9b82e1454b3ff11dca6b32f7cfd9501478f22/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e384fdc3b4e8d0cb0373f55ed6a9b82e1454b3ff11dca6b32f7cfd9501478f22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e384fdc3b4e8d0cb0373f55ed6a9b82e1454b3ff11dca6b32f7cfd9501478f22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e384fdc3b4e8d0cb0373f55ed6a9b82e1454b3ff11dca6b32f7cfd9501478f22/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:50 np0005537642 podman[75468]: 2025-11-27 10:53:50.83132438 +0000 UTC m=+0.187959213 container init 3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f (image=quay.io/ceph/ceph:v19, name=beautiful_shirley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 27 05:53:50 np0005537642 podman[75468]: 2025-11-27 10:53:50.844397557 +0000 UTC m=+0.201032310 container start 3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f (image=quay.io/ceph/ceph:v19, name=beautiful_shirley, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 27 05:53:50 np0005537642 podman[75468]: 2025-11-27 10:53:50.852809991 +0000 UTC m=+0.209444924 container attach 3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f (image=quay.io/ceph/ceph:v19, name=beautiful_shirley, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:53:51 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:53:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Nov 27 05:53:51 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:51 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 27 05:53:51 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 27 05:53:51 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Set ssh private key
Nov 27 05:53:51 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 27 05:53:51 np0005537642 systemd[1]: libpod-3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f.scope: Deactivated successfully.
Nov 27 05:53:51 np0005537642 podman[75468]: 2025-11-27 10:53:51.369648547 +0000 UTC m=+0.726283340 container died 3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f (image=quay.io/ceph/ceph:v19, name=beautiful_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 27 05:53:51 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:51 np0005537642 ceph-mon[74338]: Set ssh ssh_user
Nov 27 05:53:51 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:51 np0005537642 ceph-mon[74338]: Set ssh ssh_config
Nov 27 05:53:51 np0005537642 ceph-mon[74338]: ssh user set to ceph-admin. sudo will be used
Nov 27 05:53:51 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:51 np0005537642 systemd[1]: var-lib-containers-storage-overlay-e384fdc3b4e8d0cb0373f55ed6a9b82e1454b3ff11dca6b32f7cfd9501478f22-merged.mount: Deactivated successfully.
Nov 27 05:53:51 np0005537642 podman[75468]: 2025-11-27 10:53:51.817194397 +0000 UTC m=+1.173829170 container remove 3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f (image=quay.io/ceph/ceph:v19, name=beautiful_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 27 05:53:51 np0005537642 systemd[1]: libpod-conmon-3d33265a63b954ee1b0ef6e34b0d75760f72b822bccbe6a60b1067b839d0d70f.scope: Deactivated successfully.
Nov 27 05:53:51 np0005537642 podman[75522]: 2025-11-27 10:53:51.894644775 +0000 UTC m=+0.055825761 container create c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048 (image=quay.io/ceph/ceph:v19, name=confident_poitras, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 27 05:53:51 np0005537642 podman[75522]: 2025-11-27 10:53:51.865754728 +0000 UTC m=+0.026935734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:52 np0005537642 systemd[1]: Started libpod-conmon-c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048.scope.
Nov 27 05:53:52 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff3076aab0f34105c101fc43b17ecc4bbf6ae9d3d50134bebea44f01f6c0a47/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff3076aab0f34105c101fc43b17ecc4bbf6ae9d3d50134bebea44f01f6c0a47/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff3076aab0f34105c101fc43b17ecc4bbf6ae9d3d50134bebea44f01f6c0a47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff3076aab0f34105c101fc43b17ecc4bbf6ae9d3d50134bebea44f01f6c0a47/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff3076aab0f34105c101fc43b17ecc4bbf6ae9d3d50134bebea44f01f6c0a47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:52 np0005537642 podman[75522]: 2025-11-27 10:53:52.182738992 +0000 UTC m=+0.343920038 container init c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048 (image=quay.io/ceph/ceph:v19, name=confident_poitras, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:53:52 np0005537642 podman[75522]: 2025-11-27 10:53:52.191496522 +0000 UTC m=+0.352677548 container start c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048 (image=quay.io/ceph/ceph:v19, name=confident_poitras, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Nov 27 05:53:52 np0005537642 podman[75522]: 2025-11-27 10:53:52.208794263 +0000 UTC m=+0.369975289 container attach c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048 (image=quay.io/ceph/ceph:v19, name=confident_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:53:52 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:53:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Nov 27 05:53:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:52 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 27 05:53:52 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 27 05:53:52 np0005537642 systemd[1]: libpod-c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048.scope: Deactivated successfully.
Nov 27 05:53:52 np0005537642 podman[75522]: 2025-11-27 10:53:52.643722957 +0000 UTC m=+0.804903973 container died c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048 (image=quay.io/ceph/ceph:v19, name=confident_poitras, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 27 05:53:52 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:53:52 np0005537642 ceph-mon[74338]: Set ssh ssh_identity_key
Nov 27 05:53:52 np0005537642 ceph-mon[74338]: Set ssh private key
Nov 27 05:53:52 np0005537642 systemd[1]: var-lib-containers-storage-overlay-aff3076aab0f34105c101fc43b17ecc4bbf6ae9d3d50134bebea44f01f6c0a47-merged.mount: Deactivated successfully.
Nov 27 05:53:53 np0005537642 podman[75522]: 2025-11-27 10:53:53.235036908 +0000 UTC m=+1.396217914 container remove c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048 (image=quay.io/ceph/ceph:v19, name=confident_poitras, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 27 05:53:53 np0005537642 systemd[1]: libpod-conmon-c359b20071690463798577af0c1b74a22868b4198fa52922e105562eabe63048.scope: Deactivated successfully.
Nov 27 05:53:53 np0005537642 podman[75577]: 2025-11-27 10:53:53.372314557 +0000 UTC m=+0.116091833 container create 1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa (image=quay.io/ceph/ceph:v19, name=amazing_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:53:53 np0005537642 podman[75577]: 2025-11-27 10:53:53.285838866 +0000 UTC m=+0.029616152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:53 np0005537642 systemd[1]: Started libpod-conmon-1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa.scope.
Nov 27 05:53:53 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:53 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07ee8bf47577e49be9e6b727aa343dc122d7b4c66abb8994b9e9eaf6a05eff6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:53 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07ee8bf47577e49be9e6b727aa343dc122d7b4c66abb8994b9e9eaf6a05eff6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:53 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d07ee8bf47577e49be9e6b727aa343dc122d7b4c66abb8994b9e9eaf6a05eff6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:53 np0005537642 podman[75577]: 2025-11-27 10:53:53.553180798 +0000 UTC m=+0.296958114 container init 1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa (image=quay.io/ceph/ceph:v19, name=amazing_hertz, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:53:53 np0005537642 podman[75577]: 2025-11-27 10:53:53.563620389 +0000 UTC m=+0.307397655 container start 1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa (image=quay.io/ceph/ceph:v19, name=amazing_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 27 05:53:53 np0005537642 podman[75577]: 2025-11-27 10:53:53.610611644 +0000 UTC m=+0.354388970 container attach 1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa (image=quay.io/ceph/ceph:v19, name=amazing_hertz, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 27 05:53:53 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:53:53 np0005537642 ceph-mon[74338]: Set ssh ssh_identity_pub
Nov 27 05:53:53 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:53:53 np0005537642 amazing_hertz[75593]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQbTZ+lyOhQ85+p5jQ6t9U5bPl967DwjKUnZh4qDpxG/6k0TzCwycMmSGjv22qq4PNBZqTYhKDDdvvTqe+xqaOXRgLsxi4Oyk7MIYQF7PyZAXuWsyFldrP9fz/gX0YjNcZEkQmOWLae7nCjYuUtUz/yN6ti+at9MrXPLOexi2yWW9HuXfCOBR4abNy1rbx/Y3iEi6DRZ8cqisd83jJMNS7mlg3oy+xKijp4wGBFtxVjPWUofsSlFHY9IoG/i0174VFMGaKV/peiheOWCbF4Qb4tMA52Do0LDfI61ZNHWJP0tPcutbbi6Gq/KqlU4l5wRzYCn7EH3xoCz4suQ3f1CVUPUKgeHfVWSb9esk/NSgx8YVLTBqprnrtB2moROH+S/EYHgavJXSOqPnV7K4ZiIpX3vykvIFVm/XaWgCWM+dJPpcy8rLOtOCSoAfJWGH6taMlY0BsKxV0lI0puApsG0cCRv2U0LZzRFq9UnP3XnFFUZ0g79OEtKwvBX0Uxh207dU= zuul@controller
Nov 27 05:53:53 np0005537642 systemd[1]: libpod-1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa.scope: Deactivated successfully.
Nov 27 05:53:53 np0005537642 podman[75577]: 2025-11-27 10:53:53.929372284 +0000 UTC m=+0.673149560 container died 1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa (image=quay.io/ceph/ceph:v19, name=amazing_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:53:54 np0005537642 systemd[1]: var-lib-containers-storage-overlay-d07ee8bf47577e49be9e6b727aa343dc122d7b4c66abb8994b9e9eaf6a05eff6-merged.mount: Deactivated successfully.
Nov 27 05:53:54 np0005537642 podman[75577]: 2025-11-27 10:53:54.403948282 +0000 UTC m=+1.147725548 container remove 1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa (image=quay.io/ceph/ceph:v19, name=amazing_hertz, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 27 05:53:54 np0005537642 podman[75631]: 2025-11-27 10:53:54.470895855 +0000 UTC m=+0.035921615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:53:54 np0005537642 podman[75631]: 2025-11-27 10:53:54.619957719 +0000 UTC m=+0.184983449 container create 8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3 (image=quay.io/ceph/ceph:v19, name=admiring_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 27 05:53:54 np0005537642 systemd[1]: Started libpod-conmon-8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3.scope.
Nov 27 05:53:54 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:53:54 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:53:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf1bc45ea596c3040b64d29c9e22417cf7d2a2f292e7c31d0dff0ad3d1e599a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf1bc45ea596c3040b64d29c9e22417cf7d2a2f292e7c31d0dff0ad3d1e599a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf1bc45ea596c3040b64d29c9e22417cf7d2a2f292e7c31d0dff0ad3d1e599a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:53:54 np0005537642 systemd[1]: libpod-conmon-1ddfe71e0dd4c64b3d828e0afe47f41917871217d5d6252cda430be7d7ff82aa.scope: Deactivated successfully.
Nov 27 05:53:54 np0005537642 podman[75631]: 2025-11-27 10:53:54.870493014 +0000 UTC m=+0.435518794 container init 8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3 (image=quay.io/ceph/ceph:v19, name=admiring_solomon, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 27 05:53:54 np0005537642 podman[75631]: 2025-11-27 10:53:54.880264773 +0000 UTC m=+0.445290493 container start 8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3 (image=quay.io/ceph/ceph:v19, name=admiring_solomon, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:53:54 np0005537642 podman[75631]: 2025-11-27 10:53:54.945780548 +0000 UTC m=+0.510806268 container attach 8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3 (image=quay.io/ceph/ceph:v19, name=admiring_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 27 05:53:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052991 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:53:55 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:53:55 np0005537642 systemd-logind[801]: New session 21 of user ceph-admin.
Nov 27 05:53:55 np0005537642 systemd[1]: Created slice User Slice of UID 42477.
Nov 27 05:53:55 np0005537642 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 27 05:53:55 np0005537642 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 27 05:53:55 np0005537642 systemd[1]: Starting User Manager for UID 42477...
Nov 27 05:53:55 np0005537642 systemd[75677]: Queued start job for default target Main User Target.
Nov 27 05:53:55 np0005537642 systemd[75677]: Created slice User Application Slice.
Nov 27 05:53:55 np0005537642 systemd[75677]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 27 05:53:55 np0005537642 systemd[75677]: Started Daily Cleanup of User's Temporary Directories.
Nov 27 05:53:55 np0005537642 systemd[75677]: Reached target Paths.
Nov 27 05:53:55 np0005537642 systemd[75677]: Reached target Timers.
Nov 27 05:53:55 np0005537642 systemd[75677]: Starting D-Bus User Message Bus Socket...
Nov 27 05:53:55 np0005537642 systemd[75677]: Starting Create User's Volatile Files and Directories...
Nov 27 05:53:55 np0005537642 systemd-logind[801]: New session 23 of user ceph-admin.
Nov 27 05:53:55 np0005537642 systemd[75677]: Listening on D-Bus User Message Bus Socket.
Nov 27 05:53:55 np0005537642 systemd[75677]: Reached target Sockets.
Nov 27 05:53:55 np0005537642 systemd[75677]: Finished Create User's Volatile Files and Directories.
Nov 27 05:53:55 np0005537642 systemd[75677]: Reached target Basic System.
Nov 27 05:53:55 np0005537642 systemd[75677]: Reached target Main User Target.
Nov 27 05:53:55 np0005537642 systemd[75677]: Startup finished in 134ms.
Nov 27 05:53:55 np0005537642 systemd[1]: Started User Manager for UID 42477.
Nov 27 05:53:55 np0005537642 systemd[1]: Started Session 21 of User ceph-admin.
Nov 27 05:53:55 np0005537642 systemd[1]: Started Session 23 of User ceph-admin.
Nov 27 05:53:56 np0005537642 systemd-logind[801]: New session 24 of user ceph-admin.
Nov 27 05:53:56 np0005537642 systemd[1]: Started Session 24 of User ceph-admin.
Nov 27 05:53:56 np0005537642 systemd-logind[801]: New session 25 of user ceph-admin.
Nov 27 05:53:56 np0005537642 systemd[1]: Started Session 25 of User ceph-admin.
Nov 27 05:53:56 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 27 05:53:56 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 27 05:53:56 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:53:57 np0005537642 systemd-logind[801]: New session 26 of user ceph-admin.
Nov 27 05:53:57 np0005537642 systemd[1]: Started Session 26 of User ceph-admin.
Nov 27 05:53:57 np0005537642 ceph-mon[74338]: Deploying cephadm binary to compute-0
Nov 27 05:53:57 np0005537642 systemd-logind[801]: New session 27 of user ceph-admin.
Nov 27 05:53:57 np0005537642 systemd[1]: Started Session 27 of User ceph-admin.
Nov 27 05:53:57 np0005537642 systemd-logind[801]: New session 28 of user ceph-admin.
Nov 27 05:53:57 np0005537642 systemd[1]: Started Session 28 of User ceph-admin.
Nov 27 05:53:58 np0005537642 systemd-logind[801]: New session 29 of user ceph-admin.
Nov 27 05:53:58 np0005537642 systemd[1]: Started Session 29 of User ceph-admin.
Nov 27 05:53:58 np0005537642 systemd-logind[801]: New session 30 of user ceph-admin.
Nov 27 05:53:58 np0005537642 systemd[1]: Started Session 30 of User ceph-admin.
Nov 27 05:53:58 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:53:58 np0005537642 systemd-logind[801]: New session 31 of user ceph-admin.
Nov 27 05:53:58 np0005537642 systemd[1]: Started Session 31 of User ceph-admin.
Nov 27 05:54:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:00 np0005537642 systemd-logind[801]: New session 32 of user ceph-admin.
Nov 27 05:54:00 np0005537642 systemd[1]: Started Session 32 of User ceph-admin.
Nov 27 05:54:00 np0005537642 systemd-logind[801]: New session 33 of user ceph-admin.
Nov 27 05:54:00 np0005537642 systemd[1]: Started Session 33 of User ceph-admin.
Nov 27 05:54:00 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:54:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 27 05:54:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:01 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Added host compute-0
Nov 27 05:54:01 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 27 05:54:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 27 05:54:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 27 05:54:01 np0005537642 admiring_solomon[75647]: Added host 'compute-0' with addr '192.168.122.100'
Nov 27 05:54:01 np0005537642 systemd[1]: libpod-8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3.scope: Deactivated successfully.
Nov 27 05:54:01 np0005537642 podman[75631]: 2025-11-27 10:54:01.17013079 +0000 UTC m=+6.735156560 container died 8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3 (image=quay.io/ceph/ceph:v19, name=admiring_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True)
Nov 27 05:54:01 np0005537642 systemd[1]: var-lib-containers-storage-overlay-7bf1bc45ea596c3040b64d29c9e22417cf7d2a2f292e7c31d0dff0ad3d1e599a-merged.mount: Deactivated successfully.
Nov 27 05:54:01 np0005537642 podman[75631]: 2025-11-27 10:54:01.396417083 +0000 UTC m=+6.961442793 container remove 8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3 (image=quay.io/ceph/ceph:v19, name=admiring_solomon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 27 05:54:01 np0005537642 systemd[1]: libpod-conmon-8bdcc4ad9cc17aa7d411a9d358e2b394c208a9f2d346fa163ac0744f20042af3.scope: Deactivated successfully.
Nov 27 05:54:01 np0005537642 podman[76096]: 2025-11-27 10:54:01.484082485 +0000 UTC m=+0.060076599 container create 08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:01 np0005537642 systemd[1]: Started libpod-conmon-08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd.scope.
Nov 27 05:54:01 np0005537642 podman[76096]: 2025-11-27 10:54:01.454317229 +0000 UTC m=+0.030311373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:01 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7b20f0ce3026965ceee21490e7f8beaed7c4eb52645ba21d8723f0ac2b4615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7b20f0ce3026965ceee21490e7f8beaed7c4eb52645ba21d8723f0ac2b4615/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e7b20f0ce3026965ceee21490e7f8beaed7c4eb52645ba21d8723f0ac2b4615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:01 np0005537642 podman[76096]: 2025-11-27 10:54:01.600119053 +0000 UTC m=+0.176113237 container init 08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:01 np0005537642 podman[76096]: 2025-11-27 10:54:01.614568104 +0000 UTC m=+0.190562188 container start 08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 27 05:54:01 np0005537642 podman[76096]: 2025-11-27 10:54:01.619675839 +0000 UTC m=+0.195670033 container attach 08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 27 05:54:01 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:54:01 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 27 05:54:01 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 27 05:54:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 27 05:54:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:02 np0005537642 sharp_blackburn[76123]: Scheduled mon update...
Nov 27 05:54:02 np0005537642 systemd[1]: libpod-08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd.scope: Deactivated successfully.
Nov 27 05:54:02 np0005537642 podman[76096]: 2025-11-27 10:54:02.026659818 +0000 UTC m=+0.602653952 container died 08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:02 np0005537642 systemd[1]: var-lib-containers-storage-overlay-5e7b20f0ce3026965ceee21490e7f8beaed7c4eb52645ba21d8723f0ac2b4615-merged.mount: Deactivated successfully.
Nov 27 05:54:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:02 np0005537642 ceph-mon[74338]: Added host compute-0
Nov 27 05:54:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:02 np0005537642 podman[76124]: 2025-11-27 10:54:02.255649917 +0000 UTC m=+0.703573580 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:02 np0005537642 podman[76096]: 2025-11-27 10:54:02.273100143 +0000 UTC m=+0.849094267 container remove 08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd (image=quay.io/ceph/ceph:v19, name=sharp_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:02 np0005537642 systemd[1]: libpod-conmon-08a1e5392020f85a3e0def0ce049f5147105bea1cb01e93ef9884526009abedd.scope: Deactivated successfully.
Nov 27 05:54:02 np0005537642 podman[76184]: 2025-11-27 10:54:02.424651712 +0000 UTC m=+0.116255496 container create c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e (image=quay.io/ceph/ceph:v19, name=keen_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:54:02 np0005537642 podman[76184]: 2025-11-27 10:54:02.349945288 +0000 UTC m=+0.041549132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:02 np0005537642 systemd[1]: Started libpod-conmon-c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e.scope.
Nov 27 05:54:02 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:02 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8161eb9544ded4bcb92627a856e6f38e178f8d878855ee7c63d65ed56fde8d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:02 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8161eb9544ded4bcb92627a856e6f38e178f8d878855ee7c63d65ed56fde8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:02 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f8161eb9544ded4bcb92627a856e6f38e178f8d878855ee7c63d65ed56fde8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:02 np0005537642 podman[76202]: 2025-11-27 10:54:02.581758767 +0000 UTC m=+0.211081861 container create 20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b (image=quay.io/ceph/ceph:v19, name=pensive_leavitt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:02 np0005537642 podman[76202]: 2025-11-27 10:54:02.49391521 +0000 UTC m=+0.123238354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:02 np0005537642 podman[76184]: 2025-11-27 10:54:02.61597028 +0000 UTC m=+0.307574054 container init c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e (image=quay.io/ceph/ceph:v19, name=keen_almeida, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:02 np0005537642 podman[76184]: 2025-11-27 10:54:02.627132167 +0000 UTC m=+0.318735941 container start c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e (image=quay.io/ceph/ceph:v19, name=keen_almeida, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 27 05:54:02 np0005537642 podman[76184]: 2025-11-27 10:54:02.631677176 +0000 UTC m=+0.323280950 container attach c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e (image=quay.io/ceph/ceph:v19, name=keen_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:54:02 np0005537642 systemd[1]: Started libpod-conmon-20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b.scope.
Nov 27 05:54:02 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:02 np0005537642 podman[76202]: 2025-11-27 10:54:02.706849543 +0000 UTC m=+0.336172657 container init 20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b (image=quay.io/ceph/ceph:v19, name=pensive_leavitt, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 27 05:54:02 np0005537642 podman[76202]: 2025-11-27 10:54:02.712982448 +0000 UTC m=+0.342305542 container start 20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b (image=quay.io/ceph/ceph:v19, name=pensive_leavitt, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 27 05:54:02 np0005537642 podman[76202]: 2025-11-27 10:54:02.716865888 +0000 UTC m=+0.346188982 container attach 20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b (image=quay.io/ceph/ceph:v19, name=pensive_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Nov 27 05:54:02 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:54:02 np0005537642 pensive_leavitt[76228]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Nov 27 05:54:02 np0005537642 systemd[1]: libpod-20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b.scope: Deactivated successfully.
Nov 27 05:54:02 np0005537642 podman[76202]: 2025-11-27 10:54:02.832217017 +0000 UTC m=+0.461540071 container died 20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b (image=quay.io/ceph/ceph:v19, name=pensive_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:02 np0005537642 systemd[1]: var-lib-containers-storage-overlay-220946374e8512035b74741f68107e7f85851b984b36d8153a48e5693e38000e-merged.mount: Deactivated successfully.
Nov 27 05:54:02 np0005537642 podman[76202]: 2025-11-27 10:54:02.862352113 +0000 UTC m=+0.491675167 container remove 20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b (image=quay.io/ceph/ceph:v19, name=pensive_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 27 05:54:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Nov 27 05:54:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:02 np0005537642 systemd[1]: libpod-conmon-20e87bfe31a08c0e1bb9caf5c41858e9261f3d2c21c082f985c49e5e9996c87b.scope: Deactivated successfully.
Nov 27 05:54:03 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:54:03 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 27 05:54:03 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:03 np0005537642 keen_almeida[76221]: Scheduled mgr update...
Nov 27 05:54:03 np0005537642 systemd[1]: libpod-c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e.scope: Deactivated successfully.
Nov 27 05:54:03 np0005537642 podman[76184]: 2025-11-27 10:54:03.079662861 +0000 UTC m=+0.771266645 container died c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e (image=quay.io/ceph/ceph:v19, name=keen_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Nov 27 05:54:03 np0005537642 systemd[1]: var-lib-containers-storage-overlay-0f8161eb9544ded4bcb92627a856e6f38e178f8d878855ee7c63d65ed56fde8d-merged.mount: Deactivated successfully.
Nov 27 05:54:03 np0005537642 podman[76184]: 2025-11-27 10:54:03.122855568 +0000 UTC m=+0.814459352 container remove c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e (image=quay.io/ceph/ceph:v19, name=keen_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 27 05:54:03 np0005537642 systemd[1]: libpod-conmon-c56539c087b9e08abfa6ebab8316b52b2c0cf5c4803a710a073d8d9f16c5a14e.scope: Deactivated successfully.
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: Saving service mon spec with placement count:5
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:03 np0005537642 podman[76326]: 2025-11-27 10:54:03.217268392 +0000 UTC m=+0.060102669 container create 9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a (image=quay.io/ceph/ceph:v19, name=blissful_jemison, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 27 05:54:03 np0005537642 systemd[1]: Started libpod-conmon-9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a.scope.
Nov 27 05:54:03 np0005537642 podman[76326]: 2025-11-27 10:54:03.196546183 +0000 UTC m=+0.039380510 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:03 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:03 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ccb79f5b33f553652e8e48ef7fdf62642f96f6bd602e4a3ec803289847e023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:03 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ccb79f5b33f553652e8e48ef7fdf62642f96f6bd602e4a3ec803289847e023/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:03 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5ccb79f5b33f553652e8e48ef7fdf62642f96f6bd602e4a3ec803289847e023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:03 np0005537642 podman[76326]: 2025-11-27 10:54:03.318766596 +0000 UTC m=+0.161600883 container init 9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a (image=quay.io/ceph/ceph:v19, name=blissful_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 27 05:54:03 np0005537642 podman[76326]: 2025-11-27 10:54:03.324570871 +0000 UTC m=+0.167405158 container start 9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a (image=quay.io/ceph/ceph:v19, name=blissful_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:03 np0005537642 podman[76326]: 2025-11-27 10:54:03.32910022 +0000 UTC m=+0.171934487 container attach 9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a (image=quay.io/ceph/ceph:v19, name=blissful_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:03 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:54:03 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service crash spec with placement *
Nov 27 05:54:03 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 27 05:54:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:03 np0005537642 blissful_jemison[76349]: Scheduled crash update...
Nov 27 05:54:03 np0005537642 systemd[1]: libpod-9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a.scope: Deactivated successfully.
Nov 27 05:54:03 np0005537642 podman[76326]: 2025-11-27 10:54:03.760226195 +0000 UTC m=+0.603060512 container died 9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a (image=quay.io/ceph/ceph:v19, name=blissful_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 27 05:54:03 np0005537642 systemd[1]: var-lib-containers-storage-overlay-a5ccb79f5b33f553652e8e48ef7fdf62642f96f6bd602e4a3ec803289847e023-merged.mount: Deactivated successfully.
Nov 27 05:54:03 np0005537642 podman[76326]: 2025-11-27 10:54:03.80331217 +0000 UTC m=+0.646146417 container remove 9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a (image=quay.io/ceph/ceph:v19, name=blissful_jemison, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:03 np0005537642 systemd[1]: libpod-conmon-9d0fcee0a2ebc974f0f86b9218cac4c7370055f6c64ca09442ed06898096a12a.scope: Deactivated successfully.
Nov 27 05:54:03 np0005537642 podman[76473]: 2025-11-27 10:54:03.875039529 +0000 UTC m=+0.049880239 container create 81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30 (image=quay.io/ceph/ceph:v19, name=brave_wilbur, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 27 05:54:03 np0005537642 systemd[1]: Started libpod-conmon-81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30.scope.
Nov 27 05:54:03 np0005537642 podman[76473]: 2025-11-27 10:54:03.847887757 +0000 UTC m=+0.022728537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:03 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:03 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72635ee9e56e50274e904e2064a6d6ea119cff9f33f9d2554e990f8aa2329601/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:03 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72635ee9e56e50274e904e2064a6d6ea119cff9f33f9d2554e990f8aa2329601/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:03 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72635ee9e56e50274e904e2064a6d6ea119cff9f33f9d2554e990f8aa2329601/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:03 np0005537642 podman[76473]: 2025-11-27 10:54:03.970861203 +0000 UTC m=+0.145702003 container init 81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30 (image=quay.io/ceph/ceph:v19, name=brave_wilbur, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 27 05:54:03 np0005537642 podman[76473]: 2025-11-27 10:54:03.982655278 +0000 UTC m=+0.157496008 container start 81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30 (image=quay.io/ceph/ceph:v19, name=brave_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:03 np0005537642 podman[76473]: 2025-11-27 10:54:03.987176527 +0000 UTC m=+0.162017277 container attach 81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30 (image=quay.io/ceph/ceph:v19, name=brave_wilbur, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 27 05:54:04 np0005537642 ceph-mon[74338]: Saving service mgr spec with placement count:2
Nov 27 05:54:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:04 np0005537642 podman[76543]: 2025-11-27 10:54:04.212304807 +0000 UTC m=+0.083108494 container exec 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 27 05:54:04 np0005537642 podman[76543]: 2025-11-27 10:54:04.324029653 +0000 UTC m=+0.194833290 container exec_died 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Nov 27 05:54:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/310734776' entity='client.admin' 
Nov 27 05:54:04 np0005537642 systemd[1]: libpod-81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30.scope: Deactivated successfully.
Nov 27 05:54:04 np0005537642 podman[76473]: 2025-11-27 10:54:04.410747488 +0000 UTC m=+0.585588228 container died 81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30 (image=quay.io/ceph/ceph:v19, name=brave_wilbur, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:04 np0005537642 systemd[1]: var-lib-containers-storage-overlay-72635ee9e56e50274e904e2064a6d6ea119cff9f33f9d2554e990f8aa2329601-merged.mount: Deactivated successfully.
Nov 27 05:54:04 np0005537642 podman[76473]: 2025-11-27 10:54:04.463416015 +0000 UTC m=+0.638256755 container remove 81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30 (image=quay.io/ceph/ceph:v19, name=brave_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:04 np0005537642 systemd[1]: libpod-conmon-81e09a6219ccd89e3301ad623b39d4b3531dd947456815c5808030917a554d30.scope: Deactivated successfully.
Nov 27 05:54:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:04 np0005537642 podman[76619]: 2025-11-27 10:54:04.507965481 +0000 UTC m=+0.020607327 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:04 np0005537642 podman[76619]: 2025-11-27 10:54:04.751677609 +0000 UTC m=+0.264319445 container create acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff (image=quay.io/ceph/ceph:v19, name=angry_benz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 27 05:54:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:04 np0005537642 systemd[1]: Started libpod-conmon-acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff.scope.
Nov 27 05:54:04 np0005537642 ceph-mgr[74636]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 27 05:54:04 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b699f3394c7bbcba3f0c250da6144ba2f2b7848d4705368a10fe2f7ccd7a86/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b699f3394c7bbcba3f0c250da6144ba2f2b7848d4705368a10fe2f7ccd7a86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11b699f3394c7bbcba3f0c250da6144ba2f2b7848d4705368a10fe2f7ccd7a86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:04 np0005537642 podman[76619]: 2025-11-27 10:54:04.862107278 +0000 UTC m=+0.374749174 container init acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff (image=quay.io/ceph/ceph:v19, name=angry_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:04 np0005537642 podman[76619]: 2025-11-27 10:54:04.868978313 +0000 UTC m=+0.381620149 container start acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff (image=quay.io/ceph/ceph:v19, name=angry_benz, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:04 np0005537642 podman[76619]: 2025-11-27 10:54:04.872743 +0000 UTC m=+0.385384866 container attach acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff (image=quay.io/ceph/ceph:v19, name=angry_benz, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:05 np0005537642 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76721 (sysctl)
Nov 27 05:54:05 np0005537642 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 27 05:54:05 np0005537642 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 27 05:54:05 np0005537642 ceph-mon[74338]: Saving service crash spec with placement *
Nov 27 05:54:05 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/310734776' entity='client.admin' 
Nov 27 05:54:05 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:05 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:54:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Nov 27 05:54:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:05 np0005537642 systemd[1]: libpod-acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff.scope: Deactivated successfully.
Nov 27 05:54:05 np0005537642 podman[76619]: 2025-11-27 10:54:05.320330544 +0000 UTC m=+0.832972430 container died acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff (image=quay.io/ceph/ceph:v19, name=angry_benz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 27 05:54:05 np0005537642 systemd[1]: var-lib-containers-storage-overlay-11b699f3394c7bbcba3f0c250da6144ba2f2b7848d4705368a10fe2f7ccd7a86-merged.mount: Deactivated successfully.
Nov 27 05:54:05 np0005537642 podman[76619]: 2025-11-27 10:54:05.377869129 +0000 UTC m=+0.890510995 container remove acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff (image=quay.io/ceph/ceph:v19, name=angry_benz, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:05 np0005537642 systemd[1]: libpod-conmon-acbf86ad67fd7c85e94f8c3cdbdc770db57546a37737f1abc5d13e8b92f9dcff.scope: Deactivated successfully.
Nov 27 05:54:05 np0005537642 podman[76747]: 2025-11-27 10:54:05.475426032 +0000 UTC m=+0.064230667 container create e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9 (image=quay.io/ceph/ceph:v19, name=vibrant_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:05 np0005537642 systemd[1]: Started libpod-conmon-e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9.scope.
Nov 27 05:54:05 np0005537642 podman[76747]: 2025-11-27 10:54:05.449729892 +0000 UTC m=+0.038534567 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:05 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:05 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6ba45ff31b225a9f54cbced8f627eddff90839130793c1637824b647da9710/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:05 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6ba45ff31b225a9f54cbced8f627eddff90839130793c1637824b647da9710/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:05 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6ba45ff31b225a9f54cbced8f627eddff90839130793c1637824b647da9710/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:05 np0005537642 podman[76747]: 2025-11-27 10:54:05.586997474 +0000 UTC m=+0.175802119 container init e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9 (image=quay.io/ceph/ceph:v19, name=vibrant_galileo, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 27 05:54:05 np0005537642 podman[76747]: 2025-11-27 10:54:05.599728636 +0000 UTC m=+0.188533241 container start e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9 (image=quay.io/ceph/ceph:v19, name=vibrant_galileo, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:05 np0005537642 podman[76747]: 2025-11-27 10:54:05.60375504 +0000 UTC m=+0.192559665 container attach e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9 (image=quay.io/ceph/ceph:v19, name=vibrant_galileo, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 27 05:54:05 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:54:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 27 05:54:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:06 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Added label _admin to host compute-0
Nov 27 05:54:06 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 27 05:54:06 np0005537642 vibrant_galileo[76777]: Added label _admin to host compute-0
Nov 27 05:54:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:06 np0005537642 systemd[1]: libpod-e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9.scope: Deactivated successfully.
Nov 27 05:54:06 np0005537642 podman[76747]: 2025-11-27 10:54:06.133949071 +0000 UTC m=+0.722753716 container died e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9 (image=quay.io/ceph/ceph:v19, name=vibrant_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:06 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:06 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:06 np0005537642 ceph-mon[74338]: Added label _admin to host compute-0
Nov 27 05:54:06 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:06 np0005537642 systemd[1]: var-lib-containers-storage-overlay-0c6ba45ff31b225a9f54cbced8f627eddff90839130793c1637824b647da9710-merged.mount: Deactivated successfully.
Nov 27 05:54:06 np0005537642 podman[76747]: 2025-11-27 10:54:06.560897908 +0000 UTC m=+1.149702533 container remove e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9 (image=quay.io/ceph/ceph:v19, name=vibrant_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:06 np0005537642 systemd[1]: libpod-conmon-e3f7361056c93ad714f5217c276bd501a2f47cede96a3abdbef4eb80cd853ac9.scope: Deactivated successfully.
Nov 27 05:54:06 np0005537642 podman[76946]: 2025-11-27 10:54:06.655432805 +0000 UTC m=+0.062587220 container create e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0 (image=quay.io/ceph/ceph:v19, name=zealous_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:06 np0005537642 systemd[1]: Started libpod-conmon-e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0.scope.
Nov 27 05:54:06 np0005537642 podman[76946]: 2025-11-27 10:54:06.629761936 +0000 UTC m=+0.036916431 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:06 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:06 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ec6745dc220b63614b52208d8dd639fb53742025a8b3c1bce471cb242daae9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:06 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ec6745dc220b63614b52208d8dd639fb53742025a8b3c1bce471cb242daae9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:06 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ec6745dc220b63614b52208d8dd639fb53742025a8b3c1bce471cb242daae9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:06 np0005537642 podman[76946]: 2025-11-27 10:54:06.761359526 +0000 UTC m=+0.168514041 container init e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0 (image=quay.io/ceph/ceph:v19, name=zealous_nobel, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:06 np0005537642 podman[76946]: 2025-11-27 10:54:06.771806573 +0000 UTC m=+0.178961018 container start e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0 (image=quay.io/ceph/ceph:v19, name=zealous_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:06 np0005537642 podman[76946]: 2025-11-27 10:54:06.776916859 +0000 UTC m=+0.184071314 container attach e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0 (image=quay.io/ceph/ceph:v19, name=zealous_nobel, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:06 np0005537642 ceph-mgr[74636]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 27 05:54:06 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:06 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 27 05:54:06 np0005537642 podman[76992]: 2025-11-27 10:54:06.865105414 +0000 UTC m=+0.052137782 container create 655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 27 05:54:06 np0005537642 systemd[1]: Started libpod-conmon-655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d.scope.
Nov 27 05:54:06 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:06 np0005537642 podman[76992]: 2025-11-27 10:54:06.844999893 +0000 UTC m=+0.032032291 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:54:06 np0005537642 podman[76992]: 2025-11-27 10:54:06.941697322 +0000 UTC m=+0.128729780 container init 655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:06 np0005537642 podman[76992]: 2025-11-27 10:54:06.951750017 +0000 UTC m=+0.138782405 container start 655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:06 np0005537642 podman[76992]: 2025-11-27 10:54:06.955621427 +0000 UTC m=+0.142653885 container attach 655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 27 05:54:06 np0005537642 modest_fermi[77010]: 167 167
Nov 27 05:54:06 np0005537642 systemd[1]: libpod-655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d.scope: Deactivated successfully.
Nov 27 05:54:06 np0005537642 podman[76992]: 2025-11-27 10:54:06.958379226 +0000 UTC m=+0.145411614 container died 655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:06 np0005537642 systemd[1]: var-lib-containers-storage-overlay-93109859d711b9cfe8fa77ac65def414d3907ce2c7012a9a008873b9652f1ec5-merged.mount: Deactivated successfully.
Nov 27 05:54:07 np0005537642 podman[76992]: 2025-11-27 10:54:07.005771713 +0000 UTC m=+0.192804111 container remove 655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Nov 27 05:54:07 np0005537642 systemd[1]: libpod-conmon-655982973f7cf424fe19bc31d0229c604f1c9776ff4af576cfa23a73ddaf225d.scope: Deactivated successfully.
Nov 27 05:54:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Nov 27 05:54:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1591323041' entity='client.admin' 
Nov 27 05:54:07 np0005537642 zealous_nobel[76987]: set mgr/dashboard/cluster/status
Nov 27 05:54:07 np0005537642 systemd[1]: libpod-e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0.scope: Deactivated successfully.
Nov 27 05:54:07 np0005537642 conmon[76987]: conmon e97a5b60f44e6e1e768e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0.scope/container/memory.events
Nov 27 05:54:07 np0005537642 podman[76946]: 2025-11-27 10:54:07.301454328 +0000 UTC m=+0.708608743 container died e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0 (image=quay.io/ceph/ceph:v19, name=zealous_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:07 np0005537642 systemd[1]: var-lib-containers-storage-overlay-c7ec6745dc220b63614b52208d8dd639fb53742025a8b3c1bce471cb242daae9-merged.mount: Deactivated successfully.
Nov 27 05:54:07 np0005537642 podman[76946]: 2025-11-27 10:54:07.337358419 +0000 UTC m=+0.744512864 container remove e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0 (image=quay.io/ceph/ceph:v19, name=zealous_nobel, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:54:07 np0005537642 systemd[1]: libpod-conmon-e97a5b60f44e6e1e768e409eb9b1d60e25b20f155d449103cd4b99c0851039b0.scope: Deactivated successfully.
Nov 27 05:54:07 np0005537642 ceph-mon[74338]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 27 05:54:07 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1591323041' entity='client.admin' 
Nov 27 05:54:07 np0005537642 podman[77064]: 2025-11-27 10:54:07.592735968 +0000 UTC m=+0.062750615 container create ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_carver, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 27 05:54:07 np0005537642 systemd[1]: Started libpod-conmon-ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7.scope.
Nov 27 05:54:07 np0005537642 podman[77064]: 2025-11-27 10:54:07.564816815 +0000 UTC m=+0.034831502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:54:07 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:07 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82e6e11b1cbee6178ad7a8404437158e98f6f9f27a629c510a6211fbb3fcba0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:07 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82e6e11b1cbee6178ad7a8404437158e98f6f9f27a629c510a6211fbb3fcba0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:07 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82e6e11b1cbee6178ad7a8404437158e98f6f9f27a629c510a6211fbb3fcba0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:07 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f82e6e11b1cbee6178ad7a8404437158e98f6f9f27a629c510a6211fbb3fcba0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:07 np0005537642 podman[77064]: 2025-11-27 10:54:07.686069001 +0000 UTC m=+0.156083608 container init ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_carver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 27 05:54:07 np0005537642 podman[77064]: 2025-11-27 10:54:07.694937613 +0000 UTC m=+0.164952230 container start ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 27 05:54:07 np0005537642 podman[77064]: 2025-11-27 10:54:07.698750802 +0000 UTC m=+0.168765409 container attach ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:07 np0005537642 python3[77111]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:54:07 np0005537642 podman[77114]: 2025-11-27 10:54:07.912513748 +0000 UTC m=+0.042161939 container create 21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878 (image=quay.io/ceph/ceph:v19, name=compassionate_austin, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:07 np0005537642 systemd[1]: Started libpod-conmon-21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878.scope.
Nov 27 05:54:07 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:07 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0273a02bbdf225e2c01ff7edf59dd5c9a23fc3e09c251ddbe75500198c687718/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:07 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0273a02bbdf225e2c01ff7edf59dd5c9a23fc3e09c251ddbe75500198c687718/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:07 np0005537642 podman[77114]: 2025-11-27 10:54:07.894563978 +0000 UTC m=+0.024212189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:08 np0005537642 podman[77114]: 2025-11-27 10:54:08.008401884 +0000 UTC m=+0.138050065 container init 21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878 (image=quay.io/ceph/ceph:v19, name=compassionate_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 27 05:54:08 np0005537642 podman[77114]: 2025-11-27 10:54:08.014878628 +0000 UTC m=+0.144526819 container start 21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878 (image=quay.io/ceph/ceph:v19, name=compassionate_austin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:54:08 np0005537642 podman[77114]: 2025-11-27 10:54:08.04202673 +0000 UTC m=+0.171674951 container attach 21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878 (image=quay.io/ceph/ceph:v19, name=compassionate_austin, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1516753175' entity='client.admin' 
Nov 27 05:54:08 np0005537642 systemd[1]: libpod-21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878.scope: Deactivated successfully.
Nov 27 05:54:08 np0005537642 podman[77114]: 2025-11-27 10:54:08.41317701 +0000 UTC m=+0.542825201 container died 21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878 (image=quay.io/ceph/ceph:v19, name=compassionate_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 27 05:54:08 np0005537642 systemd[1]: var-lib-containers-storage-overlay-0273a02bbdf225e2c01ff7edf59dd5c9a23fc3e09c251ddbe75500198c687718-merged.mount: Deactivated successfully.
Nov 27 05:54:08 np0005537642 podman[77114]: 2025-11-27 10:54:08.448834614 +0000 UTC m=+0.578482785 container remove 21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878 (image=quay.io/ceph/ceph:v19, name=compassionate_austin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:08 np0005537642 systemd[1]: libpod-conmon-21cdd5099a2f48e36166632579669b1f1a16d4b7752023492294a677c7035878.scope: Deactivated successfully.
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]: [
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:    {
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "available": false,
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "being_replaced": false,
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "ceph_device_lvm": false,
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "lsm_data": {},
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "lvs": [],
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "path": "/dev/sr0",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "rejected_reasons": [
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "Insufficient space (<5GB)",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "Has a FileSystem"
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        ],
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        "sys_api": {
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "actuators": null,
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "device_nodes": [
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:                "sr0"
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            ],
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "devname": "sr0",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "human_readable_size": "482.00 KB",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "id_bus": "ata",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "model": "QEMU DVD-ROM",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "nr_requests": "2",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "parent": "/dev/sr0",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "partitions": {},
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "path": "/dev/sr0",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "removable": "1",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "rev": "2.5+",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "ro": "0",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "rotational": "1",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "sas_address": "",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "sas_device_handle": "",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "scheduler_mode": "mq-deadline",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "sectors": 0,
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "sectorsize": "2048",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "size": 493568.0,
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "support_discard": "2048",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "type": "disk",
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:            "vendor": "QEMU"
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:        }
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]:    }
Nov 27 05:54:08 np0005537642 vigorous_carver[77081]: ]
Nov 27 05:54:08 np0005537642 systemd[1]: libpod-ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7.scope: Deactivated successfully.
Nov 27 05:54:08 np0005537642 podman[78224]: 2025-11-27 10:54:08.633984007 +0000 UTC m=+0.025142766 container died ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:08 np0005537642 systemd[1]: var-lib-containers-storage-overlay-f82e6e11b1cbee6178ad7a8404437158e98f6f9f27a629c510a6211fbb3fcba0-merged.mount: Deactivated successfully.
Nov 27 05:54:08 np0005537642 podman[78224]: 2025-11-27 10:54:08.68969293 +0000 UTC m=+0.080851669 container remove ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 27 05:54:08 np0005537642 systemd[1]: libpod-conmon-ae4b997f1d835179e579001fc1a5bf97f2346b981aa2649738f088bb7773b4b7.scope: Deactivated successfully.
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:54:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:08 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:54:08 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:54:08 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:09 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:54:09 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:54:09 np0005537642 ansible-async_wrapper.py[78598]: Invoked with j212744135698 30 /home/zuul/.ansible/tmp/ansible-tmp-1764240848.7993803-36989-191379763698941/AnsiballZ_command.py _
Nov 27 05:54:09 np0005537642 ansible-async_wrapper.py[78660]: Starting module and watcher
Nov 27 05:54:09 np0005537642 ansible-async_wrapper.py[78660]: Start watching 78663 (30)
Nov 27 05:54:09 np0005537642 ansible-async_wrapper.py[78663]: Start module (78663)
Nov 27 05:54:09 np0005537642 ansible-async_wrapper.py[78598]: Return async_wrapper task started.
Nov 27 05:54:09 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1516753175' entity='client.admin' 
Nov 27 05:54:09 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:09 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:09 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:09 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:09 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 27 05:54:09 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:09 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:54:09 np0005537642 python3[78664]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:54:09 np0005537642 podman[78742]: 2025-11-27 10:54:09.702945373 +0000 UTC m=+0.023426067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:09 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:54:09 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:54:09 np0005537642 podman[78742]: 2025-11-27 10:54:09.918386987 +0000 UTC m=+0.238867661 container create da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec (image=quay.io/ceph/ceph:v19, name=laughing_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:10 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:10 np0005537642 systemd[1]: Started libpod-conmon-da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec.scope.
Nov 27 05:54:10 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:10 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002440cdff859efaa497d9d5d812843776b0ad74912c87e10327ad9215f0892e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:10 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/002440cdff859efaa497d9d5d812843776b0ad74912c87e10327ad9215f0892e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:10 np0005537642 podman[78742]: 2025-11-27 10:54:10.363974534 +0000 UTC m=+0.684455248 container init da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec (image=quay.io/ceph/ceph:v19, name=laughing_poincare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:10 np0005537642 podman[78742]: 2025-11-27 10:54:10.373921287 +0000 UTC m=+0.694401971 container start da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec (image=quay.io/ceph/ceph:v19, name=laughing_poincare, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:10 np0005537642 podman[78742]: 2025-11-27 10:54:10.389670654 +0000 UTC m=+0.710151408 container attach da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec (image=quay.io/ceph/ceph:v19, name=laughing_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 27 05:54:10 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:54:10 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:54:10 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 27 05:54:10 np0005537642 laughing_poincare[78948]: 
Nov 27 05:54:10 np0005537642 laughing_poincare[78948]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 27 05:54:10 np0005537642 systemd[1]: libpod-da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec.scope: Deactivated successfully.
Nov 27 05:54:10 np0005537642 podman[78742]: 2025-11-27 10:54:10.779019681 +0000 UTC m=+1.099500375 container died da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec (image=quay.io/ceph/ceph:v19, name=laughing_poincare, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:10 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:10 np0005537642 systemd[1]: var-lib-containers-storage-overlay-002440cdff859efaa497d9d5d812843776b0ad74912c87e10327ad9215f0892e-merged.mount: Deactivated successfully.
Nov 27 05:54:10 np0005537642 podman[78742]: 2025-11-27 10:54:10.957949177 +0000 UTC m=+1.278429861 container remove da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec (image=quay.io/ceph/ceph:v19, name=laughing_poincare, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 27 05:54:10 np0005537642 systemd[1]: libpod-conmon-da4f34a24d51e75a54192a24fcc0b04d8a1576e69c091b1c42728b8ff689c6ec.scope: Deactivated successfully.
Nov 27 05:54:10 np0005537642 ansible-async_wrapper.py[78663]: Module complete (78663)
Nov 27 05:54:11 np0005537642 python3[79260]: ansible-ansible.legacy.async_status Invoked with jid=j212744135698.78598 mode=status _async_dir=/root/.ansible_async
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:54:11 np0005537642 python3[79434]: ansible-ansible.legacy.async_status Invoked with jid=j212744135698.78598 mode=cleanup _async_dir=/root/.ansible_async
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:11 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 9c39c92d-42f5-42ce-90e9-b0e3f10166a1 (Updating crash deployment (+1 -> 1))
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:54:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:54:11 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 27 05:54:11 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 27 05:54:12 np0005537642 python3[79517]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 27 05:54:12 np0005537642 podman[79555]: 2025-11-27 10:54:12.278350681 +0000 UTC m=+0.039070681 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:54:12 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:54:12 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:12 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:12 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:12 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 27 05:54:12 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 27 05:54:12 np0005537642 podman[79555]: 2025-11-27 10:54:12.444120943 +0000 UTC m=+0.204840913 container create f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 27 05:54:12 np0005537642 systemd[1]: Started libpod-conmon-f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e.scope.
Nov 27 05:54:12 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:12 np0005537642 podman[79555]: 2025-11-27 10:54:12.670886379 +0000 UTC m=+0.431606359 container init f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 27 05:54:12 np0005537642 podman[79555]: 2025-11-27 10:54:12.680930945 +0000 UTC m=+0.441650925 container start f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:12 np0005537642 recursing_sammet[79597]: 167 167
Nov 27 05:54:12 np0005537642 systemd[1]: libpod-f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e.scope: Deactivated successfully.
Nov 27 05:54:12 np0005537642 python3[79599]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:54:12 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:12 np0005537642 podman[79555]: 2025-11-27 10:54:12.893188188 +0000 UTC m=+0.653908238 container attach f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sammet, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:12 np0005537642 podman[79555]: 2025-11-27 10:54:12.894106465 +0000 UTC m=+0.654826445 container died f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:13 np0005537642 systemd[1]: var-lib-containers-storage-overlay-43605a2f989424e3da7e84b956f96e4d9f60af7c65e9ef37235c6aa9d26bb461-merged.mount: Deactivated successfully.
Nov 27 05:54:13 np0005537642 ceph-mon[74338]: Deploying daemon crash.compute-0 on compute-0
Nov 27 05:54:13 np0005537642 podman[79555]: 2025-11-27 10:54:13.891539248 +0000 UTC m=+1.652259198 container remove f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_sammet, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:54:14 np0005537642 podman[79614]: 2025-11-27 10:54:13.912422421 +0000 UTC m=+1.160526980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:14 np0005537642 podman[79614]: 2025-11-27 10:54:14.116371548 +0000 UTC m=+1.364476067 container create aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085 (image=quay.io/ceph/ceph:v19, name=quirky_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Nov 27 05:54:14 np0005537642 systemd[1]: Started libpod-conmon-aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085.scope.
Nov 27 05:54:14 np0005537642 systemd[1]: Reloading.
Nov 27 05:54:14 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:54:14 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:54:14 np0005537642 systemd[1]: libpod-conmon-f1ed810282f1f437e1ee9ffe93605ee7f28d3cfec70c70f59067b4d8cfc3796e.scope: Deactivated successfully.
Nov 27 05:54:14 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a3a252193f040112eb51e6abe0def3f7b54763f83f4a882298b83815aa701b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a3a252193f040112eb51e6abe0def3f7b54763f83f4a882298b83815aa701b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5a3a252193f040112eb51e6abe0def3f7b54763f83f4a882298b83815aa701b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:14 np0005537642 ansible-async_wrapper.py[78660]: Done in kid B.
Nov 27 05:54:14 np0005537642 podman[79614]: 2025-11-27 10:54:14.472056848 +0000 UTC m=+1.720161327 container init aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085 (image=quay.io/ceph/ceph:v19, name=quirky_williams, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 05:54:14 np0005537642 podman[79614]: 2025-11-27 10:54:14.486606082 +0000 UTC m=+1.734710571 container start aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085 (image=quay.io/ceph/ceph:v19, name=quirky_williams, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:14 np0005537642 podman[79614]: 2025-11-27 10:54:14.490738429 +0000 UTC m=+1.738842938 container attach aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085 (image=quay.io/ceph/ceph:v19, name=quirky_williams, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Nov 27 05:54:14 np0005537642 systemd[1]: Reloading.
Nov 27 05:54:14 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:54:14 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:54:14 np0005537642 systemd[1]: Starting Ceph crash.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:54:14 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:14 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 27 05:54:14 np0005537642 quirky_williams[79632]: 
Nov 27 05:54:14 np0005537642 quirky_williams[79632]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 27 05:54:14 np0005537642 systemd[1]: libpod-aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085.scope: Deactivated successfully.
Nov 27 05:54:14 np0005537642 podman[79614]: 2025-11-27 10:54:14.895387882 +0000 UTC m=+2.143492401 container died aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085 (image=quay.io/ceph/ceph:v19, name=quirky_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:14 np0005537642 systemd[1]: var-lib-containers-storage-overlay-d5a3a252193f040112eb51e6abe0def3f7b54763f83f4a882298b83815aa701b-merged.mount: Deactivated successfully.
Nov 27 05:54:14 np0005537642 podman[79614]: 2025-11-27 10:54:14.953827753 +0000 UTC m=+2.201932242 container remove aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085 (image=quay.io/ceph/ceph:v19, name=quirky_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:14 np0005537642 systemd[1]: libpod-conmon-aaec488ee4934d1db757e8a41deb30b1057fdd0bb3531e3f93c7058073198085.scope: Deactivated successfully.
Nov 27 05:54:15 np0005537642 podman[79791]: 2025-11-27 10:54:15.027962551 +0000 UTC m=+0.040670797 container create ecb89941845f22dd3be41b7f723064e5b2ca71bd132d5f010ac371b52464b0bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52001ef76f360be0062536fb78b806613dc2dea621f8396cd4db3ff7891fcb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52001ef76f360be0062536fb78b806613dc2dea621f8396cd4db3ff7891fcb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52001ef76f360be0062536fb78b806613dc2dea621f8396cd4db3ff7891fcb1/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52001ef76f360be0062536fb78b806613dc2dea621f8396cd4db3ff7891fcb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:15 np0005537642 podman[79791]: 2025-11-27 10:54:15.011444751 +0000 UTC m=+0.024152987 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:54:15 np0005537642 podman[79791]: 2025-11-27 10:54:15.123833926 +0000 UTC m=+0.136542242 container init ecb89941845f22dd3be41b7f723064e5b2ca71bd132d5f010ac371b52464b0bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:15 np0005537642 podman[79791]: 2025-11-27 10:54:15.133013047 +0000 UTC m=+0.145721323 container start ecb89941845f22dd3be41b7f723064e5b2ca71bd132d5f010ac371b52464b0bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:54:15 np0005537642 bash[79791]: ecb89941845f22dd3be41b7f723064e5b2ca71bd132d5f010ac371b52464b0bd
Nov 27 05:54:15 np0005537642 systemd[1]: Started Ceph crash.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:15 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 9c39c92d-42f5-42ce-90e9-b0e3f10166a1 (Updating crash deployment (+1 -> 1))
Nov 27 05:54:15 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 9c39c92d-42f5-42ce-90e9-b0e3f10166a1 (Updating crash deployment (+1 -> 1)) in 4 seconds
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 27 05:54:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: 2025-11-27T10:54:15.350+0000 7fe271e43640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: 2025-11-27T10:54:15.350+0000 7fe271e43640 -1 AuthRegistry(0x7fe26c0698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: 2025-11-27T10:54:15.351+0000 7fe271e43640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: 2025-11-27T10:54:15.351+0000 7fe271e43640 -1 AuthRegistry(0x7fe271e41ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: 2025-11-27T10:54:15.352+0000 7fe270e41640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: 2025-11-27T10:54:15.352+0000 7fe271e43640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 27 05:54:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-crash-compute-0[79806]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 27 05:54:15 np0005537642 python3[79877]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:54:15 np0005537642 podman[79924]: 2025-11-27 10:54:15.589711899 +0000 UTC m=+0.065505283 container create 9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb (image=quay.io/ceph/ceph:v19, name=sharp_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:15 np0005537642 systemd[1]: Started libpod-conmon-9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb.scope.
Nov 27 05:54:15 np0005537642 podman[79924]: 2025-11-27 10:54:15.55808613 +0000 UTC m=+0.033879584 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:15 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dfc97d0fe420f99cb4767bc17c1b1c5d4d9f2f54ff1bb909028a933b055608/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dfc97d0fe420f99cb4767bc17c1b1c5d4d9f2f54ff1bb909028a933b055608/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37dfc97d0fe420f99cb4767bc17c1b1c5d4d9f2f54ff1bb909028a933b055608/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:15 np0005537642 podman[79924]: 2025-11-27 10:54:15.703855364 +0000 UTC m=+0.179648808 container init 9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb (image=quay.io/ceph/ceph:v19, name=sharp_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Nov 27 05:54:15 np0005537642 podman[79924]: 2025-11-27 10:54:15.715648509 +0000 UTC m=+0.191441893 container start 9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb (image=quay.io/ceph/ceph:v19, name=sharp_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 27 05:54:15 np0005537642 podman[79924]: 2025-11-27 10:54:15.719855718 +0000 UTC m=+0.195649122 container attach 9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb (image=quay.io/ceph/ceph:v19, name=sharp_taussig, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2288156278' entity='client.admin' 
Nov 27 05:54:16 np0005537642 systemd[1]: libpod-9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb.scope: Deactivated successfully.
Nov 27 05:54:16 np0005537642 podman[79924]: 2025-11-27 10:54:16.111665576 +0000 UTC m=+0.587458970 container died 9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb (image=quay.io/ceph/ceph:v19, name=sharp_taussig, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:16 np0005537642 systemd[1]: var-lib-containers-storage-overlay-37dfc97d0fe420f99cb4767bc17c1b1c5d4d9f2f54ff1bb909028a933b055608-merged.mount: Deactivated successfully.
Nov 27 05:54:16 np0005537642 podman[79924]: 2025-11-27 10:54:16.16668308 +0000 UTC m=+0.642476474 container remove 9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb (image=quay.io/ceph/ceph:v19, name=sharp_taussig, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:16 np0005537642 systemd[1]: libpod-conmon-9216f8f027b56dd307434130baa9eac589e7a5cd63c3e3857d2c9549a4c6b3eb.scope: Deactivated successfully.
Nov 27 05:54:16 np0005537642 podman[80029]: 2025-11-27 10:54:16.18109172 +0000 UTC m=+0.093036996 container exec 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:16 np0005537642 podman[80029]: 2025-11-27 10:54:16.280828045 +0000 UTC m=+0.192773331 container exec_died 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 27 05:54:16 np0005537642 python3[80115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 podman[80136]: 2025-11-27 10:54:16.611485654 +0000 UTC m=+0.049890789 container create b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be (image=quay.io/ceph/ceph:v19, name=loving_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:16 np0005537642 systemd[1]: Started libpod-conmon-b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be.scope.
Nov 27 05:54:16 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:16 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e511f4840c184b7b2d2968f5ed52233e35f6e0c73e3cfb0a4ba2c163ddf6999/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:16 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e511f4840c184b7b2d2968f5ed52233e35f6e0c73e3cfb0a4ba2c163ddf6999/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:16 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e511f4840c184b7b2d2968f5ed52233e35f6e0c73e3cfb0a4ba2c163ddf6999/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Nov 27 05:54:16 np0005537642 podman[80136]: 2025-11-27 10:54:16.588625374 +0000 UTC m=+0.027030539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Nov 27 05:54:16 np0005537642 podman[80136]: 2025-11-27 10:54:16.692884648 +0000 UTC m=+0.131289803 container init b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be (image=quay.io/ceph/ceph:v19, name=loving_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Nov 27 05:54:16 np0005537642 podman[80136]: 2025-11-27 10:54:16.700770632 +0000 UTC m=+0.139175767 container start b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be (image=quay.io/ceph/ceph:v19, name=loving_brahmagupta, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 27 05:54:16 np0005537642 podman[80136]: 2025-11-27 10:54:16.704146918 +0000 UTC m=+0.142552073 container attach b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be (image=quay.io/ceph/ceph:v19, name=loving_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:16 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 27 05:54:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:54:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:54:16 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 27 05:54:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 27 05:54:16 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2353628274' entity='client.admin' 
Nov 27 05:54:17 np0005537642 systemd[1]: libpod-b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be.scope: Deactivated successfully.
Nov 27 05:54:17 np0005537642 podman[80265]: 2025-11-27 10:54:17.166563293 +0000 UTC m=+0.023470808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2288156278' entity='client.admin' 
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 27 05:54:17 np0005537642 podman[80265]: 2025-11-27 10:54:17.359658122 +0000 UTC m=+0.216565597 container create b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da (image=quay.io/ceph/ceph:v19, name=gracious_fermi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 27 05:54:17 np0005537642 systemd[1]: Started libpod-conmon-b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da.scope.
Nov 27 05:54:17 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:17 np0005537642 podman[80265]: 2025-11-27 10:54:17.624147259 +0000 UTC m=+0.481054754 container init b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da (image=quay.io/ceph/ceph:v19, name=gracious_fermi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:54:17 np0005537642 podman[80265]: 2025-11-27 10:54:17.629555253 +0000 UTC m=+0.486462738 container start b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da (image=quay.io/ceph/ceph:v19, name=gracious_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:17 np0005537642 gracious_fermi[80294]: 167 167
Nov 27 05:54:17 np0005537642 systemd[1]: libpod-b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da.scope: Deactivated successfully.
Nov 27 05:54:17 np0005537642 podman[80265]: 2025-11-27 10:54:17.697013141 +0000 UTC m=+0.553920696 container attach b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da (image=quay.io/ceph/ceph:v19, name=gracious_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 27 05:54:17 np0005537642 podman[80265]: 2025-11-27 10:54:17.697874975 +0000 UTC m=+0.554782460 container died b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da (image=quay.io/ceph/ceph:v19, name=gracious_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 27 05:54:18 np0005537642 podman[80136]: 2025-11-27 10:54:18.146082406 +0000 UTC m=+1.584487581 container died b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be (image=quay.io/ceph/ceph:v19, name=loving_brahmagupta, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:54:18 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 1 completed events
Nov 27 05:54:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:54:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:54:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:54:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:54:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:54:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:54:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:54:18 np0005537642 ceph-mon[74338]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 27 05:54:18 np0005537642 ceph-mon[74338]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 27 05:54:18 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2353628274' entity='client.admin' 
Nov 27 05:54:18 np0005537642 systemd[1]: var-lib-containers-storage-overlay-4e511f4840c184b7b2d2968f5ed52233e35f6e0c73e3cfb0a4ba2c163ddf6999-merged.mount: Deactivated successfully.
Nov 27 05:54:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:18 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:18 np0005537642 podman[80280]: 2025-11-27 10:54:18.916770343 +0000 UTC m=+1.693212821 container remove b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be (image=quay.io/ceph/ceph:v19, name=loving_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 27 05:54:18 np0005537642 systemd[1]: libpod-conmon-b8a97003bf86c3821c6f989b2899f0b8114cdd23d1a27b385d1525a1d35278be.scope: Deactivated successfully.
Nov 27 05:54:19 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8d87c363842e173d79b06377e90c91e98d989a411adabb90329e44129e11db16-merged.mount: Deactivated successfully.
Nov 27 05:54:19 np0005537642 python3[80340]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:54:19 np0005537642 podman[80265]: 2025-11-27 10:54:19.455482027 +0000 UTC m=+2.312389552 container remove b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da (image=quay.io/ceph/ceph:v19, name=gracious_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 27 05:54:19 np0005537642 systemd[1]: libpod-conmon-b440b82dbc0669975b520c020b4506f20d2927b03ccfe73dee737670f77267da.scope: Deactivated successfully.
Nov 27 05:54:19 np0005537642 podman[80341]: 2025-11-27 10:54:19.570141076 +0000 UTC m=+0.127209847 container create 21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a (image=quay.io/ceph/ceph:v19, name=gifted_northcutt, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:19 np0005537642 podman[80341]: 2025-11-27 10:54:19.491602773 +0000 UTC m=+0.048671614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:54:19 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:19 np0005537642 systemd[1]: Started libpod-conmon-21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a.scope.
Nov 27 05:54:19 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93810d36c920b057fbcbe2f4f9697342735309e95df6b723d0d9d8d3ceb30f95/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93810d36c920b057fbcbe2f4f9697342735309e95df6b723d0d9d8d3ceb30f95/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93810d36c920b057fbcbe2f4f9697342735309e95df6b723d0d9d8d3ceb30f95/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:19 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:19 np0005537642 podman[80341]: 2025-11-27 10:54:19.938795315 +0000 UTC m=+0.495864076 container init 21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a (image=quay.io/ceph/ceph:v19, name=gifted_northcutt, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:19 np0005537642 podman[80341]: 2025-11-27 10:54:19.948708387 +0000 UTC m=+0.505777168 container start 21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a (image=quay.io/ceph/ceph:v19, name=gifted_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:20 np0005537642 podman[80341]: 2025-11-27 10:54:20.063528321 +0000 UTC m=+0.620597112 container attach 21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a (image=quay.io/ceph/ceph:v19, name=gifted_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:20 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.qnrkij (unknown last config time)...
Nov 27 05:54:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.qnrkij (unknown last config time)...
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.qnrkij", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qnrkij", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:20 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.qnrkij on compute-0
Nov 27 05:54:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.qnrkij on compute-0
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Nov 27 05:54:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/958741504' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 27 05:54:20 np0005537642 podman[80447]: 2025-11-27 10:54:20.636959631 +0000 UTC m=+0.035991484 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:20 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:20 np0005537642 podman[80447]: 2025-11-27 10:54:20.935819357 +0000 UTC m=+0.334851170 container create dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde (image=quay.io/ceph/ceph:v19, name=determined_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: Reconfiguring mgr.compute-0.qnrkij (unknown last config time)...
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.qnrkij", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: Reconfiguring daemon mgr.compute-0.qnrkij on compute-0
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/958741504' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 27 05:54:21 np0005537642 systemd[1]: Started libpod-conmon-dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde.scope.
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/958741504' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 27 05:54:21 np0005537642 gifted_northcutt[80357]: set require_min_compat_client to mimic
Nov 27 05:54:21 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:21 np0005537642 systemd[1]: libpod-21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a.scope: Deactivated successfully.
Nov 27 05:54:21 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 27 05:54:21 np0005537642 podman[80447]: 2025-11-27 10:54:21.371889412 +0000 UTC m=+0.770921225 container init dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde (image=quay.io/ceph/ceph:v19, name=determined_bhabha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:21 np0005537642 podman[80341]: 2025-11-27 10:54:21.374179677 +0000 UTC m=+1.931248418 container died 21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a (image=quay.io/ceph/ceph:v19, name=gifted_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:54:21 np0005537642 podman[80447]: 2025-11-27 10:54:21.380598509 +0000 UTC m=+0.779630292 container start dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde (image=quay.io/ceph/ceph:v19, name=determined_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:21 np0005537642 determined_bhabha[80463]: 167 167
Nov 27 05:54:21 np0005537642 systemd[1]: libpod-dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde.scope: Deactivated successfully.
Nov 27 05:54:21 np0005537642 podman[80447]: 2025-11-27 10:54:21.594940893 +0000 UTC m=+0.993972686 container attach dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde (image=quay.io/ceph/ceph:v19, name=determined_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 27 05:54:21 np0005537642 podman[80447]: 2025-11-27 10:54:21.595326163 +0000 UTC m=+0.994357956 container died dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde (image=quay.io/ceph/ceph:v19, name=determined_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:54:22 np0005537642 systemd[1]: var-lib-containers-storage-overlay-20060faea2be0ffaa63249ea4101d003415141a3fdaa72260837fd237be37b56-merged.mount: Deactivated successfully.
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/958741504' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 27 05:54:22 np0005537642 podman[80480]: 2025-11-27 10:54:22.296957905 +0000 UTC m=+0.895434812 container remove dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde (image=quay.io/ceph/ceph:v19, name=determined_bhabha, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Nov 27 05:54:22 np0005537642 systemd[1]: libpod-conmon-dea42c192efb952fb714212bc04d9e30b062ae55bb3d399c972613ce79e38cde.scope: Deactivated successfully.
Nov 27 05:54:22 np0005537642 systemd[1]: var-lib-containers-storage-overlay-93810d36c920b057fbcbe2f4f9697342735309e95df6b723d0d9d8d3ceb30f95-merged.mount: Deactivated successfully.
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:54:22 np0005537642 podman[80341]: 2025-11-27 10:54:22.728071322 +0000 UTC m=+3.285140093 container remove 21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a (image=quay.io/ceph/ceph:v19, name=gifted_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 27 05:54:22 np0005537642 systemd[1]: libpod-conmon-21d23420caf386df89e61bc2553afc0f21c9702ea9d6dcb42e8a2ff3d60aa68a.scope: Deactivated successfully.
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:54:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:22 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:23 np0005537642 python3[80548]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:54:23 np0005537642 podman[80549]: 2025-11-27 10:54:23.507400375 +0000 UTC m=+0.028265063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:24 np0005537642 podman[80549]: 2025-11-27 10:54:24.386268792 +0000 UTC m=+0.907133440 container create feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923 (image=quay.io/ceph/ceph:v19, name=elegant_ellis, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:54:24 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:24 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:24 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:24 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:24 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:24 np0005537642 systemd[1]: Started libpod-conmon-feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923.scope.
Nov 27 05:54:25 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491d184fda2741c8a17cea47a1e0809bda1e732c0f533d70758d8359091f76bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491d184fda2741c8a17cea47a1e0809bda1e732c0f533d70758d8359091f76bd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:25 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/491d184fda2741c8a17cea47a1e0809bda1e732c0f533d70758d8359091f76bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:25 np0005537642 podman[80549]: 2025-11-27 10:54:25.686628151 +0000 UTC m=+2.207492789 container init feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923 (image=quay.io/ceph/ceph:v19, name=elegant_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Nov 27 05:54:25 np0005537642 podman[80549]: 2025-11-27 10:54:25.700491814 +0000 UTC m=+2.221356422 container start feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923 (image=quay.io/ceph/ceph:v19, name=elegant_ellis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 27 05:54:26 np0005537642 podman[80549]: 2025-11-27 10:54:26.023921012 +0000 UTC m=+2.544785660 container attach feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923 (image=quay.io/ceph/ceph:v19, name=elegant_ellis, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:26 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:54:26 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 27 05:54:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 27 05:54:27 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:28 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Added host compute-0
Nov 27 05:54:28 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:54:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:28 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:29 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:29 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:29 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:29 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:29 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:30 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Nov 27 05:54:30 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Nov 27 05:54:30 np0005537642 ceph-mon[74338]: Added host compute-0
Nov 27 05:54:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:30 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:31 np0005537642 ceph-mon[74338]: Deploying cephadm binary to compute-1
Nov 27 05:54:32 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 27 05:54:34 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:34 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Added host compute-1
Nov 27 05:54:34 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added host compute-1
Nov 27 05:54:34 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:54:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:54:35 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:35 np0005537642 ceph-mon[74338]: Added host compute-1
Nov 27 05:54:35 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:35 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Nov 27 05:54:35 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Nov 27 05:54:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:36 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:37 np0005537642 ceph-mon[74338]: Deploying cephadm binary to compute-2
Nov 27 05:54:37 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:54:37 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:38 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:38 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Added host compute-2
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Added host compute-2
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:40 np0005537642 elegant_ellis[80565]: Added host 'compute-0' with addr '192.168.122.100'
Nov 27 05:54:40 np0005537642 elegant_ellis[80565]: Added host 'compute-1' with addr '192.168.122.101'
Nov 27 05:54:40 np0005537642 elegant_ellis[80565]: Added host 'compute-2' with addr '192.168.122.102'
Nov 27 05:54:40 np0005537642 elegant_ellis[80565]: Scheduled mon update...
Nov 27 05:54:40 np0005537642 elegant_ellis[80565]: Scheduled mgr update...
Nov 27 05:54:40 np0005537642 elegant_ellis[80565]: Scheduled osd.default_drive_group update...
Nov 27 05:54:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:40 np0005537642 systemd[1]: libpod-feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923.scope: Deactivated successfully.
Nov 27 05:54:40 np0005537642 podman[80549]: 2025-11-27 10:54:40.518860667 +0000 UTC m=+17.039725315 container died feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923 (image=quay.io/ceph/ceph:v19, name=elegant_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:54:40 np0005537642 systemd[1]: var-lib-containers-storage-overlay-491d184fda2741c8a17cea47a1e0809bda1e732c0f533d70758d8359091f76bd-merged.mount: Deactivated successfully.
Nov 27 05:54:40 np0005537642 podman[80549]: 2025-11-27 10:54:40.777758261 +0000 UTC m=+17.298622869 container remove feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923 (image=quay.io/ceph/ceph:v19, name=elegant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:54:40 np0005537642 systemd[1]: libpod-conmon-feefd3d7dcb8921057a22288d121c11230d9513d35aee900315c8d63b6ee1923.scope: Deactivated successfully.
Nov 27 05:54:40 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: Added host compute-2
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: Saving service mon spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: Marking host: compute-1 for OSDSpec preview refresh.
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:41 np0005537642 python3[80725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:54:41 np0005537642 podman[80727]: 2025-11-27 10:54:41.437322725 +0000 UTC m=+0.057986178 container create 7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023 (image=quay.io/ceph/ceph:v19, name=strange_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 27 05:54:41 np0005537642 systemd[1]: Started libpod-conmon-7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023.scope.
Nov 27 05:54:41 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:54:41 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30886ed6754396bfc764b00bdf356e9358e35c5ba6d5e3dabe6b82440bfb6c35/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:41 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30886ed6754396bfc764b00bdf356e9358e35c5ba6d5e3dabe6b82440bfb6c35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:41 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30886ed6754396bfc764b00bdf356e9358e35c5ba6d5e3dabe6b82440bfb6c35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:54:41 np0005537642 podman[80727]: 2025-11-27 10:54:41.416966339 +0000 UTC m=+0.037629802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:54:41 np0005537642 podman[80727]: 2025-11-27 10:54:41.522500494 +0000 UTC m=+0.143163937 container init 7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023 (image=quay.io/ceph/ceph:v19, name=strange_engelbart, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Nov 27 05:54:41 np0005537642 podman[80727]: 2025-11-27 10:54:41.528065779 +0000 UTC m=+0.148729242 container start 7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023 (image=quay.io/ceph/ceph:v19, name=strange_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 05:54:41 np0005537642 podman[80727]: 2025-11-27 10:54:41.531913964 +0000 UTC m=+0.152577477 container attach 7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023 (image=quay.io/ceph/ceph:v19, name=strange_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 27 05:54:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315928621' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 27 05:54:41 np0005537642 strange_engelbart[80742]: 
Nov 27 05:54:41 np0005537642 strange_engelbart[80742]: {"fsid":"4c838139-e0c9-556a-a9ca-e4422f459af7","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":76,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-11-27T10:53:22:366160+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-27T10:53:22.368539+0000","services":{}},"progress_events":{}}
Nov 27 05:54:41 np0005537642 systemd[1]: libpod-7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023.scope: Deactivated successfully.
Nov 27 05:54:41 np0005537642 podman[80727]: 2025-11-27 10:54:41.968428292 +0000 UTC m=+0.589091725 container died 7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023 (image=quay.io/ceph/ceph:v19, name=strange_engelbart, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:54:41 np0005537642 systemd[1]: var-lib-containers-storage-overlay-30886ed6754396bfc764b00bdf356e9358e35c5ba6d5e3dabe6b82440bfb6c35-merged.mount: Deactivated successfully.
Nov 27 05:54:42 np0005537642 podman[80727]: 2025-11-27 10:54:42.013612618 +0000 UTC m=+0.634276081 container remove 7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023 (image=quay.io/ceph/ceph:v19, name=strange_engelbart, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:54:42 np0005537642 systemd[1]: libpod-conmon-7e0a232dd1454e8b59d33930393f13a1a0446d3f9a2eba827f27c5e3588da023.scope: Deactivated successfully.
Nov 27 05:54:42 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:44 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:46 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_10:54:46
Nov 27 05:54:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 05:54:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 27 05:54:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] No pools available
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:54:48 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:50 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:52 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:54 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:54:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:54:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:54:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:54:56 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:57 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:54:57 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:54:57 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:54:57 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:54:57 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:54:58 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:54:58 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:54:58 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:54:58 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:54:59 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:54:59 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:54:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:54:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:54:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:54:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:55:00.403+0000 7f7af0d1a640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 077183f5-76c1-480d-990b-b8fccfc93eff (Updating crash deployment (+1 -> 2))
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: service_name: mon
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: placement:
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  hosts:
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  - compute-0
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  - compute-1
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  - compute-2
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:55:00.404+0000 7f7af0d1a640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: service_name: mgr
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: placement:
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  hosts:
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  - compute-0
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  - compute-1
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  - compute-2
Nov 27 05:55:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:55:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Nov 27 05:55:00 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Nov 27 05:55:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 27 05:55:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 27 05:55:01 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 27 05:55:02 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:02 np0005537642 ceph-mon[74338]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Nov 27 05:55:02 np0005537642 ceph-mon[74338]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Nov 27 05:55:02 np0005537642 ceph-mon[74338]: Deploying daemon crash.compute-1 on compute-1
Nov 27 05:55:02 np0005537642 ceph-mon[74338]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:03 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 077183f5-76c1-480d-990b-b8fccfc93eff (Updating crash deployment (+1 -> 2))
Nov 27 05:55:03 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 077183f5-76c1-480d-990b-b8fccfc93eff (Updating crash deployment (+1 -> 2)) in 3 seconds
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:55:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:55:04 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:04 np0005537642 podman[80868]: 2025-11-27 10:55:04.790650934 +0000 UTC m=+0.030808350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:05 np0005537642 podman[80868]: 2025-11-27 10:55:05.194367644 +0000 UTC m=+0.434525000 container create cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 27 05:55:05 np0005537642 systemd[1]: Started libpod-conmon-cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926.scope.
Nov 27 05:55:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:05 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:05 np0005537642 podman[80868]: 2025-11-27 10:55:05.943027253 +0000 UTC m=+1.183184679 container init cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:55:05 np0005537642 podman[80868]: 2025-11-27 10:55:05.950365501 +0000 UTC m=+1.190522857 container start cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:05 np0005537642 competent_roentgen[80885]: 167 167
Nov 27 05:55:05 np0005537642 systemd[1]: libpod-cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926.scope: Deactivated successfully.
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:55:06 np0005537642 podman[80868]: 2025-11-27 10:55:06.090489037 +0000 UTC m=+1.330646383 container attach cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 27 05:55:06 np0005537642 podman[80868]: 2025-11-27 10:55:06.091024043 +0000 UTC m=+1.331181409 container died cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:06 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ad1a348c-1187-4e69-8a57-5bb9e05dd78e"} v 0)
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/30453594' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ad1a348c-1187-4e69-8a57-5bb9e05dd78e"}]: dispatch
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:55:06 np0005537642 systemd[1]: var-lib-containers-storage-overlay-181b55aeb563fe198cef4c5f992e2e6fea2c320a249d3ad95868f702f3193f33-merged.mount: Deactivated successfully.
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/30453594' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ad1a348c-1187-4e69-8a57-5bb9e05dd78e"}]': finished
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:06 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 27 05:55:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 27 05:55:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2012160899' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 27 05:55:07 np0005537642 podman[80868]: 2025-11-27 10:55:07.661619163 +0000 UTC m=+2.901776479 container remove cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_roentgen, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 27 05:55:07 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.101:0/30453594' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "ad1a348c-1187-4e69-8a57-5bb9e05dd78e"}]: dispatch
Nov 27 05:55:07 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.101:0/30453594' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ad1a348c-1187-4e69-8a57-5bb9e05dd78e"}]': finished
Nov 27 05:55:07 np0005537642 systemd[1]: libpod-conmon-cba1bb7c398be407d1d6d7f4f5b39b4ea0c13ff08ce1fee37477187ac9982926.scope: Deactivated successfully.
Nov 27 05:55:07 np0005537642 podman[80909]: 2025-11-27 10:55:07.829359981 +0000 UTC m=+0.029046236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:07 np0005537642 podman[80909]: 2025-11-27 10:55:07.935556556 +0000 UTC m=+0.135242801 container create c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_khorana, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 27 05:55:08 np0005537642 systemd[1]: Started libpod-conmon-c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22.scope.
Nov 27 05:55:08 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb6bb77adb2239aa87de3e2c6b0d6ca218a8420446ba18ebbf487a617364768/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb6bb77adb2239aa87de3e2c6b0d6ca218a8420446ba18ebbf487a617364768/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb6bb77adb2239aa87de3e2c6b0d6ca218a8420446ba18ebbf487a617364768/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb6bb77adb2239aa87de3e2c6b0d6ca218a8420446ba18ebbf487a617364768/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2cb6bb77adb2239aa87de3e2c6b0d6ca218a8420446ba18ebbf487a617364768/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:08 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:08 np0005537642 podman[80909]: 2025-11-27 10:55:08.586812282 +0000 UTC m=+0.786498547 container init c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 27 05:55:08 np0005537642 podman[80909]: 2025-11-27 10:55:08.599868051 +0000 UTC m=+0.799554296 container start c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:55:08 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 2 completed events
Nov 27 05:55:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:55:08 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 27 05:55:08 np0005537642 podman[80909]: 2025-11-27 10:55:08.94781534 +0000 UTC m=+1.147501645 container attach c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:55:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:09 np0005537642 jovial_khorana[80925]: --> passed data devices: 0 physical, 1 LVM
Nov 27 05:55:09 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:09 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:09 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 047f3e15-ba18-4c86-b24b-f8e9584c5eff
Nov 27 05:55:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff"} v 0)
Nov 27 05:55:09 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1959054132' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff"}]: dispatch
Nov 27 05:55:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 27 05:55:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:55:09 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1959054132' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff"}]': finished
Nov 27 05:55:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 27 05:55:10 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 27 05:55:10 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:10 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:10 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 27 05:55:10 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:10 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:10 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:10 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:10 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:11 np0005537642 ceph-mon[74338]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 27 05:55:11 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:11 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1959054132' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff"}]: dispatch
Nov 27 05:55:11 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1959054132' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff"}]': finished
Nov 27 05:55:11 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 27 05:55:11 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 27 05:55:11 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 27 05:55:11 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:11 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 27 05:55:11 np0005537642 lvm[80988]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:55:11 np0005537642 lvm[80988]: VG ceph_vg0 finished
Nov 27 05:55:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Nov 27 05:55:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859649820' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 27 05:55:12 np0005537642 jovial_khorana[80925]: stderr: got monmap epoch 1
Nov 27 05:55:12 np0005537642 jovial_khorana[80925]: --> Creating keyring file for osd.1
Nov 27 05:55:12 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 27 05:55:12 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 27 05:55:12 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 047f3e15-ba18-4c86-b24b-f8e9584c5eff --setuser ceph --setgroup ceph
Nov 27 05:55:12 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:12 np0005537642 python3[81058]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:55:12 np0005537642 podman[81060]: 2025-11-27 10:55:12.482886559 +0000 UTC m=+0.040609802 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:55:12 np0005537642 podman[81060]: 2025-11-27 10:55:12.787345161 +0000 UTC m=+0.345068384 container create 24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c (image=quay.io/ceph/ceph:v19, name=peaceful_jang, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 27 05:55:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Nov 27 05:55:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 27 05:55:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:55:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:55:12 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-1
Nov 27 05:55:12 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-1
Nov 27 05:55:13 np0005537642 systemd[1]: Started libpod-conmon-24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c.scope.
Nov 27 05:55:13 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:13 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edd3b67f759b31e3f03fbaf6d1d2a7e5a347ab738b366d2565ad2e1fbb49e3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:13 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edd3b67f759b31e3f03fbaf6d1d2a7e5a347ab738b366d2565ad2e1fbb49e3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:13 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4edd3b67f759b31e3f03fbaf6d1d2a7e5a347ab738b366d2565ad2e1fbb49e3d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:13 np0005537642 podman[81060]: 2025-11-27 10:55:13.397896204 +0000 UTC m=+0.955619477 container init 24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c (image=quay.io/ceph/ceph:v19, name=peaceful_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 27 05:55:13 np0005537642 podman[81060]: 2025-11-27 10:55:13.410896722 +0000 UTC m=+0.968619935 container start 24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c (image=quay.io/ceph/ceph:v19, name=peaceful_jang, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 27 05:55:13 np0005537642 ceph-mon[74338]: Deploying daemon osd.0 on compute-1
Nov 27 05:55:13 np0005537642 podman[81060]: 2025-11-27 10:55:13.559077317 +0000 UTC m=+1.116800510 container attach 24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c (image=quay.io/ceph/ceph:v19, name=peaceful_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 27 05:55:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 27 05:55:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3989430752' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 27 05:55:13 np0005537642 peaceful_jang[81085]: 
Nov 27 05:55:13 np0005537642 peaceful_jang[81085]: {"fsid":"4c838139-e0c9-556a-a9ca-e4422f459af7","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":108,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1764240909,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-11-27T10:53:22:366160+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-27T10:54:48.818414+0000","services":{}},"progress_events":{}}
Nov 27 05:55:13 np0005537642 systemd[1]: libpod-24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c.scope: Deactivated successfully.
Nov 27 05:55:13 np0005537642 podman[81060]: 2025-11-27 10:55:13.966505808 +0000 UTC m=+1.524228991 container died 24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c (image=quay.io/ceph/ceph:v19, name=peaceful_jang, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 27 05:55:14 np0005537642 systemd[1]: var-lib-containers-storage-overlay-4edd3b67f759b31e3f03fbaf6d1d2a7e5a347ab738b366d2565ad2e1fbb49e3d-merged.mount: Deactivated successfully.
Nov 27 05:55:14 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:14 np0005537642 podman[81060]: 2025-11-27 10:55:14.667974521 +0000 UTC m=+2.225697704 container remove 24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c (image=quay.io/ceph/ceph:v19, name=peaceful_jang, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 27 05:55:14 np0005537642 systemd[1]: libpod-conmon-24af39d9eaea14d84e6e70c7fe9ad95febe2770a8a9bacfdf6ac1054397ebe5c.scope: Deactivated successfully.
Nov 27 05:55:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:16 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:55:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:55:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:55:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:55:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:55:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:55:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:55:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:55:18 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v43: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:55:19 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:55:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Nov 27 05:55:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 27 05:55:20 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:20 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:20 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:20 np0005537642 ceph-mon[74338]: from='osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-1,root=default}
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:21 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 27 05:55:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:21 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: stderr: 2025-11-27T10:55:12.183+0000 7f3ba1bb1740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: stderr: 2025-11-27T10:55:12.448+0000 7f3ba1bb1740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 27 05:55:21 np0005537642 jovial_khorana[80925]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 27 05:55:22 np0005537642 systemd[1]: libpod-c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22.scope: Deactivated successfully.
Nov 27 05:55:22 np0005537642 systemd[1]: libpod-c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22.scope: Consumed 2.451s CPU time.
Nov 27 05:55:22 np0005537642 podman[80909]: 2025-11-27 10:55:22.00328847 +0000 UTC m=+14.202974675 container died c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_khorana, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: from='osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: from='osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Nov 27 05:55:22 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v46: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:22 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 27 05:55:22 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:22 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/310437009; not ready for session (expect reconnect)
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:22 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 27 05:55:22 np0005537642 systemd[1]: var-lib-containers-storage-overlay-2cb6bb77adb2239aa87de3e2c6b0d6ca218a8420446ba18ebbf487a617364768-merged.mount: Deactivated successfully.
Nov 27 05:55:23 np0005537642 podman[80909]: 2025-11-27 10:55:23.544713261 +0000 UTC m=+15.744399476 container remove c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_khorana, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:23 np0005537642 systemd[1]: libpod-conmon-c6174648abd7eb2bf5f2b25a7030843fb4c69836e011240a6e31e07c44912e22.scope: Deactivated successfully.
Nov 27 05:55:23 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/310437009; not ready for session (expect reconnect)
Nov 27 05:55:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:23 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 27 05:55:24 np0005537642 ceph-mon[74338]: from='osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Nov 27 05:55:24 np0005537642 podman[82084]: 2025-11-27 10:55:24.299196814 +0000 UTC m=+0.058126073 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:24 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v48: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:24 np0005537642 podman[82084]: 2025-11-27 10:55:24.553163202 +0000 UTC m=+0.312092371 container create fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 27 05:55:24 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/310437009; not ready for session (expect reconnect)
Nov 27 05:55:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:24 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:24 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 27 05:55:24 np0005537642 systemd[1]: Started libpod-conmon-fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92.scope.
Nov 27 05:55:24 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:24 np0005537642 podman[82084]: 2025-11-27 10:55:24.858273304 +0000 UTC m=+0.617202553 container init fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 27 05:55:24 np0005537642 podman[82084]: 2025-11-27 10:55:24.870448577 +0000 UTC m=+0.629377766 container start fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 27 05:55:24 np0005537642 silly_jones[82100]: 167 167
Nov 27 05:55:24 np0005537642 systemd[1]: libpod-fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92.scope: Deactivated successfully.
Nov 27 05:55:24 np0005537642 podman[82084]: 2025-11-27 10:55:24.989799403 +0000 UTC m=+0.748728602 container attach fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 27 05:55:24 np0005537642 podman[82084]: 2025-11-27 10:55:24.990763562 +0000 UTC m=+0.749692751 container died fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:55:25 np0005537642 systemd[1]: var-lib-containers-storage-overlay-f110e6ce923349e334eb97fc84b7916a369457ddeccdc20179c88263e0e81c6f-merged.mount: Deactivated successfully.
Nov 27 05:55:25 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.101:6800/310437009; not ready for session (expect reconnect)
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:25 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Nov 27 05:55:25 np0005537642 podman[82084]: 2025-11-27 10:55:25.895417018 +0000 UTC m=+1.654346207 container remove fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009] boot
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:25 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:25 np0005537642 systemd[1]: libpod-conmon-fc344b29d594f065375e26ff64512fe9f87f7cb6e00d0b0af4be9b878e19be92.scope: Deactivated successfully.
Nov 27 05:55:26 np0005537642 podman[82125]: 2025-11-27 10:55:26.122690081 +0000 UTC m=+0.044134837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:26 np0005537642 podman[82125]: 2025-11-27 10:55:26.288441099 +0000 UTC m=+0.209885855 container create 67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gagarin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 27 05:55:26 np0005537642 systemd[1]: Started libpod-conmon-67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9.scope.
Nov 27 05:55:26 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e784aa5ed5c3d9a6da326704728d98529b62112ef0730771f2c37908d3fc84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e784aa5ed5c3d9a6da326704728d98529b62112ef0730771f2c37908d3fc84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e784aa5ed5c3d9a6da326704728d98529b62112ef0730771f2c37908d3fc84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69e784aa5ed5c3d9a6da326704728d98529b62112ef0730771f2c37908d3fc84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:26 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v50: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 27 05:55:26 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] creating mgr pool
Nov 27 05:55:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Nov 27 05:55:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 27 05:55:26 np0005537642 podman[82125]: 2025-11-27 10:55:26.540893142 +0000 UTC m=+0.462337888 container init 67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:55:26 np0005537642 podman[82125]: 2025-11-27 10:55:26.553309002 +0000 UTC m=+0.474753718 container start 67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gagarin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 27 05:55:26 np0005537642 ceph-mon[74338]: OSD bench result of 8307.622028 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 27 05:55:26 np0005537642 ceph-mon[74338]: osd.0 [v2:192.168.122.101:6800/310437009,v1:192.168.122.101:6801/310437009] boot
Nov 27 05:55:26 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 27 05:55:26 np0005537642 podman[82125]: 2025-11-27 10:55:26.816115364 +0000 UTC m=+0.737560090 container attach 67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gagarin, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]: {
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:    "1": [
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:        {
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "devices": [
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "/dev/loop3"
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            ],
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "lv_name": "ceph_lv0",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "lv_size": "21470642176",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4c838139-e0c9-556a-a9ca-e4422f459af7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=047f3e15-ba18-4c86-b24b-f8e9584c5eff,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "lv_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "name": "ceph_lv0",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "tags": {
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.block_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.cephx_lockbox_secret": "",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.cluster_fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.cluster_name": "ceph",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.crush_device_class": "",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.encrypted": "0",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.osd_fsid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.osd_id": "1",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.type": "block",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.vdo": "0",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:                "ceph.with_tpm": "0"
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            },
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "type": "block",
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:            "vg_name": "ceph_vg0"
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:        }
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]:    ]
Nov 27 05:55:26 np0005537642 bold_gagarin[82142]: }
Nov 27 05:55:26 np0005537642 systemd[1]: libpod-67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9.scope: Deactivated successfully.
Nov 27 05:55:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 27 05:55:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 27 05:55:26 np0005537642 podman[82153]: 2025-11-27 10:55:26.910170336 +0000 UTC m=+0.027560322 container died 67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:27 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Nov 27 05:55:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 27 05:55:27 np0005537642 systemd[1]: var-lib-containers-storage-overlay-69e784aa5ed5c3d9a6da326704728d98529b62112ef0730771f2c37908d3fc84-merged.mount: Deactivated successfully.
Nov 27 05:55:27 np0005537642 podman[82153]: 2025-11-27 10:55:27.943200076 +0000 UTC m=+1.060590022 container remove 67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Nov 27 05:55:27 np0005537642 systemd[1]: libpod-conmon-67a29eb7b95dee37b528e7199529791b5b0600acca021fb65862fadbec5f21f9.scope: Deactivated successfully.
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:55:28 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 27 05:55:28 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Nov 27 05:55:28 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:28 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:28 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] creating main.db for devicehealth
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 27 05:55:28 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 27 05:55:28 np0005537642 ceph-mgr[74636]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.1 ()
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 27 05:55:28 np0005537642 podman[82262]: 2025-11-27 10:55:28.716991321 +0000 UTC m=+0.125758552 container create 464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_herschel, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:55:28 np0005537642 podman[82262]: 2025-11-27 10:55:28.628231607 +0000 UTC m=+0.036998828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 27 05:55:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 27 05:55:29 np0005537642 systemd[1]: Started libpod-conmon-464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b.scope.
Nov 27 05:55:29 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:29 np0005537642 podman[82262]: 2025-11-27 10:55:29.313445163 +0000 UTC m=+0.722212404 container init 464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_herschel, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 27 05:55:29 np0005537642 podman[82262]: 2025-11-27 10:55:29.326160916 +0000 UTC m=+0.734928147 container start 464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_herschel, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 27 05:55:29 np0005537642 confident_herschel[82289]: 167 167
Nov 27 05:55:29 np0005537642 systemd[1]: libpod-464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b.scope: Deactivated successfully.
Nov 27 05:55:29 np0005537642 podman[82262]: 2025-11-27 10:55:29.4023672 +0000 UTC m=+0.811134461 container attach 464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:55:29 np0005537642 podman[82262]: 2025-11-27 10:55:29.403035574 +0000 UTC m=+0.811802785 container died 464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_herschel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 e11: 2 total, 1 up, 2 in
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 1 up, 2 in
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:29 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: Deploying daemon osd.1 on compute-0
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 27 05:55:29 np0005537642 ceph-mon[74338]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 27 05:55:29 np0005537642 systemd[1]: var-lib-containers-storage-overlay-f9a07c5ae141f91565eb497e5275afc4e09e5ac108637eafae61e9c4c6661f35-merged.mount: Deactivated successfully.
Nov 27 05:55:30 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qnrkij(active, since 107s)
Nov 27 05:55:30 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:30 np0005537642 podman[82262]: 2025-11-27 10:55:30.554757907 +0000 UTC m=+1.963525158 container remove 464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:55:30 np0005537642 systemd[1]: libpod-conmon-464a54b9e390e936b583ad3d47876d4557cf2a9285217a2bf62f9b553e86732b.scope: Deactivated successfully.
Nov 27 05:55:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:31 np0005537642 podman[82319]: 2025-11-27 10:55:31.151368332 +0000 UTC m=+0.117093108 container create e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:31 np0005537642 podman[82319]: 2025-11-27 10:55:31.079082896 +0000 UTC m=+0.044807692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:31 np0005537642 systemd[1]: Started libpod-conmon-e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9.scope.
Nov 27 05:55:31 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d59e9bf1204bbf1fdc43feecd3cc75f050de82526fa45e4316097d9b2927e49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d59e9bf1204bbf1fdc43feecd3cc75f050de82526fa45e4316097d9b2927e49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d59e9bf1204bbf1fdc43feecd3cc75f050de82526fa45e4316097d9b2927e49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d59e9bf1204bbf1fdc43feecd3cc75f050de82526fa45e4316097d9b2927e49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d59e9bf1204bbf1fdc43feecd3cc75f050de82526fa45e4316097d9b2927e49/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:31 np0005537642 podman[82319]: 2025-11-27 10:55:31.503093254 +0000 UTC m=+0.468818090 container init e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:31 np0005537642 podman[82319]: 2025-11-27 10:55:31.512544385 +0000 UTC m=+0.478269171 container start e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:55:31 np0005537642 podman[82319]: 2025-11-27 10:55:31.582521549 +0000 UTC m=+0.548246305 container attach e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 27 05:55:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test[82336]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Nov 27 05:55:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test[82336]:                            [--no-systemd] [--no-tmpfs]
Nov 27 05:55:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test[82336]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 27 05:55:31 np0005537642 systemd[1]: libpod-e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9.scope: Deactivated successfully.
Nov 27 05:55:31 np0005537642 podman[82319]: 2025-11-27 10:55:31.711427741 +0000 UTC m=+0.677152487 container died e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:31 np0005537642 systemd[1]: var-lib-containers-storage-overlay-9d59e9bf1204bbf1fdc43feecd3cc75f050de82526fa45e4316097d9b2927e49-merged.mount: Deactivated successfully.
Nov 27 05:55:32 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 427 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:32 np0005537642 podman[82319]: 2025-11-27 10:55:32.666875297 +0000 UTC m=+1.632600083 container remove e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate-test, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 27 05:55:32 np0005537642 systemd[1]: libpod-conmon-e938e952b1575b7d9b3b78f405bd09cadcfe87145b51d486e9194113f33a21e9.scope: Deactivated successfully.
Nov 27 05:55:33 np0005537642 systemd[1]: Reloading.
Nov 27 05:55:33 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:55:33 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:55:34 np0005537642 systemd[1]: Reloading.
Nov 27 05:55:34 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:55:34 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:55:34 np0005537642 systemd[1]: Starting Ceph osd.1 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:55:34 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 427 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:34 np0005537642 podman[82501]: 2025-11-27 10:55:34.694258321 +0000 UTC m=+0.036401624 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:34 np0005537642 podman[82501]: 2025-11-27 10:55:34.964685316 +0000 UTC m=+0.306828569 container create 1c6028d74ec54b058d5500898e61d5f07816ce668cdcda3137011bbb3f845a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 27 05:55:35 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb72e3e2368a170962cc6fbd3a004105652480fb20886b7dabe740c0d3dae267/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb72e3e2368a170962cc6fbd3a004105652480fb20886b7dabe740c0d3dae267/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb72e3e2368a170962cc6fbd3a004105652480fb20886b7dabe740c0d3dae267/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb72e3e2368a170962cc6fbd3a004105652480fb20886b7dabe740c0d3dae267/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb72e3e2368a170962cc6fbd3a004105652480fb20886b7dabe740c0d3dae267/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:35 np0005537642 podman[82501]: 2025-11-27 10:55:35.357138918 +0000 UTC m=+0.699282221 container init 1c6028d74ec54b058d5500898e61d5f07816ce668cdcda3137011bbb3f845a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 27 05:55:35 np0005537642 podman[82501]: 2025-11-27 10:55:35.3675363 +0000 UTC m=+0.709679563 container start 1c6028d74ec54b058d5500898e61d5f07816ce668cdcda3137011bbb3f845a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:55:35 np0005537642 podman[82501]: 2025-11-27 10:55:35.483609115 +0000 UTC m=+0.825752428 container attach 1c6028d74ec54b058d5500898e61d5f07816ce668cdcda3137011bbb3f845a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Nov 27 05:55:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:35 np0005537642 bash[82501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:35 np0005537642 bash[82501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:36 np0005537642 lvm[82597]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:55:36 np0005537642 lvm[82597]: VG ceph_vg0 finished
Nov 27 05:55:36 np0005537642 lvm[82601]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:55:36 np0005537642 lvm[82601]: VG ceph_vg0 finished
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:36 np0005537642 bash[82501]: --> Failed to activate via raw: did not find any matching OSD to activate
Nov 27 05:55:36 np0005537642 bash[82501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:36 np0005537642 bash[82501]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 27 05:55:36 np0005537642 lvm[82605]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:55:36 np0005537642 lvm[82605]: VG ceph_vg0 finished
Nov 27 05:55:36 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 427 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:36 np0005537642 lvm[82611]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:55:36 np0005537642 lvm[82611]: VG ceph_vg0 finished
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 27 05:55:36 np0005537642 bash[82501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 27 05:55:36 np0005537642 bash[82501]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:36 np0005537642 bash[82501]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:36 np0005537642 bash[82501]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 27 05:55:36 np0005537642 bash[82501]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 27 05:55:36 np0005537642 bash[82501]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 27 05:55:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate[82516]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 27 05:55:36 np0005537642 bash[82501]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 27 05:55:36 np0005537642 systemd[1]: libpod-1c6028d74ec54b058d5500898e61d5f07816ce668cdcda3137011bbb3f845a88.scope: Deactivated successfully.
Nov 27 05:55:36 np0005537642 systemd[1]: libpod-1c6028d74ec54b058d5500898e61d5f07816ce668cdcda3137011bbb3f845a88.scope: Consumed 1.624s CPU time.
Nov 27 05:55:36 np0005537642 podman[82501]: 2025-11-27 10:55:36.88829526 +0000 UTC m=+2.230438533 container died 1c6028d74ec54b058d5500898e61d5f07816ce668cdcda3137011bbb3f845a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:55:38 np0005537642 systemd[1]: var-lib-containers-storage-overlay-fb72e3e2368a170962cc6fbd3a004105652480fb20886b7dabe740c0d3dae267-merged.mount: Deactivated successfully.
Nov 27 05:55:38 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 427 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:39 np0005537642 podman[82501]: 2025-11-27 10:55:39.039888543 +0000 UTC m=+4.382031796 container remove 1c6028d74ec54b058d5500898e61d5f07816ce668cdcda3137011bbb3f845a88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:55:39 np0005537642 podman[82756]: 2025-11-27 10:55:39.373279664 +0000 UTC m=+0.067490069 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:39 np0005537642 podman[82756]: 2025-11-27 10:55:39.54404324 +0000 UTC m=+0.238253635 container create 103aa49f82a1b164e1d55fa44e89bc1a991e670b05a75db0a5ff6719be5f9530 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 27 05:55:39 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6478859f2e9e5c94688322f9d960e5c4e4d62ff957fd623b38e8dd891f1b24ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:39 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6478859f2e9e5c94688322f9d960e5c4e4d62ff957fd623b38e8dd891f1b24ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:39 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6478859f2e9e5c94688322f9d960e5c4e4d62ff957fd623b38e8dd891f1b24ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:39 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6478859f2e9e5c94688322f9d960e5c4e4d62ff957fd623b38e8dd891f1b24ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:39 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6478859f2e9e5c94688322f9d960e5c4e4d62ff957fd623b38e8dd891f1b24ce/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:39 np0005537642 podman[82756]: 2025-11-27 10:55:39.871744446 +0000 UTC m=+0.565954911 container init 103aa49f82a1b164e1d55fa44e89bc1a991e670b05a75db0a5ff6719be5f9530 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 27 05:55:39 np0005537642 podman[82756]: 2025-11-27 10:55:39.881958594 +0000 UTC m=+0.576168999 container start 103aa49f82a1b164e1d55fa44e89bc1a991e670b05a75db0a5ff6719be5f9530 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:39 np0005537642 ceph-osd[82775]: set uid:gid to 167:167 (ceph:ceph)
Nov 27 05:55:39 np0005537642 ceph-osd[82775]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Nov 27 05:55:39 np0005537642 ceph-osd[82775]: pidfile_write: ignore empty --pid-file
Nov 27 05:55:39 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:39 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:39 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:39 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:39 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:39 np0005537642 bash[82756]: 103aa49f82a1b164e1d55fa44e89bc1a991e670b05a75db0a5ff6719be5f9530
Nov 27 05:55:39 np0005537642 systemd[1]: Started Ceph osd.1 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:55:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:40 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 427 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1c00 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f56bf1800 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: load: jerasure load: lrc 
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 27 05:55:40 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:41 np0005537642 podman[82892]: 2025-11-27 10:55:41.108642911 +0000 UTC m=+0.037746564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:41 np0005537642 podman[82892]: 2025-11-27 10:55:41.251776571 +0000 UTC m=+0.180880184 container create ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 27 05:55:41 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:41 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:41 np0005537642 systemd[1]: Started libpod-conmon-ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0.scope.
Nov 27 05:55:41 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:41 np0005537642 podman[82892]: 2025-11-27 10:55:41.678738504 +0000 UTC m=+0.607842197 container init ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_murdock, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 27 05:55:41 np0005537642 podman[82892]: 2025-11-27 10:55:41.692507091 +0000 UTC m=+0.621610704 container start ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_murdock, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:41 np0005537642 bold_murdock[82923]: 167 167
Nov 27 05:55:41 np0005537642 systemd[1]: libpod-ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0.scope: Deactivated successfully.
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount shared_bdev_used = 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: RocksDB version: 7.9.2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Git sha 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: DB SUMMARY
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: DB Session ID:  AE7GADLGY0E40TKLSHJO
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: CURRENT file:  CURRENT
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: IDENTITY file:  IDENTITY
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                         Options.error_if_exists: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.create_if_missing: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                         Options.paranoid_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                                     Options.env: 0x562f57a5ddc0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                                Options.info_log: 0x562f57a617a0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_file_opening_threads: 16
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                              Options.statistics: (nil)
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.use_fsync: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.max_log_file_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                         Options.allow_fallocate: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.use_direct_reads: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.create_missing_column_families: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                              Options.db_log_dir: 
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                                 Options.wal_dir: db.wal
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.advise_random_on_open: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.write_buffer_manager: 0x562f57b58a00
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                            Options.rate_limiter: (nil)
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.unordered_write: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.row_cache: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                              Options.wal_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.allow_ingest_behind: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.two_write_queues: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.manual_wal_flush: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.wal_compression: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.atomic_flush: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.log_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.allow_data_in_errors: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.db_host_id: __hostname__
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.max_background_jobs: 4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.max_background_compactions: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.max_subcompactions: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.max_open_files: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.bytes_per_sync: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.max_background_flushes: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Compression algorithms supported:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: 	kZSTD supported: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: 	kXpressCompression supported: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: 	kBZip2Compression supported: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: 	kLZ4Compression supported: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: 	kZlibCompression supported: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: 	kSnappyCompression supported: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c87350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c87350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c87350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c87350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c87350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c87350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b60)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c87350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c869b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c869b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57a61b80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c869b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3247bce4-0644-453e-951d-cf466e3fa111
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240941763500, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240941763887, "job": 1, "event": "recovery_finished"}
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: freelist init
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: freelist _read_cfg
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs umount
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) close
Nov 27 05:55:41 np0005537642 podman[82892]: 2025-11-27 10:55:41.800191928 +0000 UTC m=+0.729295521 container attach ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 05:55:41 np0005537642 podman[82892]: 2025-11-27 10:55:41.801421926 +0000 UTC m=+0.730525509 container died ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_murdock, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bdev(0x562f57a8d000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluefs mount shared_bdev_used = 4718592
Nov 27 05:55:41 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: RocksDB version: 7.9.2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Git sha 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Compile date 2025-07-17 03:12:14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: DB SUMMARY
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: DB Session ID:  AE7GADLGY0E40TKLSHJP
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: CURRENT file:  CURRENT
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: IDENTITY file:  IDENTITY
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                         Options.error_if_exists: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.create_if_missing: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                         Options.paranoid_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                                     Options.env: 0x562f57bfc2a0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                                Options.info_log: 0x562f57d227c0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_file_opening_threads: 16
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                              Options.statistics: (nil)
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.use_fsync: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.max_log_file_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                         Options.allow_fallocate: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.use_direct_reads: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.create_missing_column_families: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                              Options.db_log_dir: 
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                                 Options.wal_dir: db.wal
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.advise_random_on_open: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.write_buffer_manager: 0x562f57b58aa0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                            Options.rate_limiter: (nil)
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.unordered_write: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.row_cache: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                              Options.wal_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.allow_ingest_behind: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.two_write_queues: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.manual_wal_flush: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.wal_compression: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.atomic_flush: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.log_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.allow_data_in_errors: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.db_host_id: __hostname__
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.max_background_jobs: 4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.max_background_compactions: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.max_subcompactions: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.max_open_files: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.bytes_per_sync: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.max_background_flushes: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Compression algorithms supported:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: 	kZSTD supported: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: 	kXpressCompression supported: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: 	kBZip2Compression supported: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: 	kLZ4Compression supported: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: 	kZlibCompression supported: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: 	kLZ4HCCompression supported: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: 	kSnappyCompression supported: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24da0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c869b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24da0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c869b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24da0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c869b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24da0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c869b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24da0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c869b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24da0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c869b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24da0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x562f56c869b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24dc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c87090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24dc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c87090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:           Options.merge_operator: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.compaction_filter_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.sst_partitioner_factory: None
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562f57c24dc0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x562f56c87090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.write_buffer_size: 16777216
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.max_write_buffer_number: 64
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.compression: LZ4
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.num_levels: 7
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.level: 32767
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.compression_opts.strategy: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                  Options.compression_opts.enabled: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.arena_block_size: 1048576
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.disable_auto_compactions: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.inplace_update_support: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.bloom_locality: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                    Options.max_successive_merges: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.paranoid_file_checks: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.force_consistency_checks: 1
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.report_bg_io_stats: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                               Options.ttl: 2592000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                       Options.enable_blob_files: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                           Options.min_blob_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                          Options.blob_file_size: 268435456
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb:                Options.blob_file_starting_level: 0
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3247bce4-0644-453e-951d-cf466e3fa111
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240942023695, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240942252120, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764240942, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3247bce4-0644-453e-951d-cf466e3fa111", "db_session_id": "AE7GADLGY0E40TKLSHJP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 27 05:55:42 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 427 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240942454257, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764240942, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3247bce4-0644-453e-951d-cf466e3fa111", "db_session_id": "AE7GADLGY0E40TKLSHJP", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 27 05:55:42 np0005537642 systemd[1]: var-lib-containers-storage-overlay-7fad0ca1044c5cf8fb765d8e728857ec3f4e7836384006bfe2b8fb92ba1b5e0c-merged.mount: Deactivated successfully.
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240942712794, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764240942, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3247bce4-0644-453e-951d-cf466e3fa111", "db_session_id": "AE7GADLGY0E40TKLSHJP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764240942806961, "job": 1, "event": "recovery_finished"}
Nov 27 05:55:42 np0005537642 ceph-osd[82775]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 27 05:55:42 np0005537642 podman[82892]: 2025-11-27 10:55:42.962311914 +0000 UTC m=+1.891415527 container remove ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 27 05:55:42 np0005537642 systemd[1]: libpod-conmon-ec3eabea12b9e5e7ab88827234b2048bdcf8aa59e3f3207ceab5ae7607aa46d0.scope: Deactivated successfully.
Nov 27 05:55:43 np0005537642 podman[83325]: 2025-11-27 10:55:43.163569892 +0000 UTC m=+0.028667262 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562f57c71c00
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: rocksdb: DB pointer 0x562f57c08000
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 27 05:55:43 np0005537642 podman[83325]: 2025-11-27 10:55:43.270663146 +0000 UTC m=+0.135760476 container create 1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1.3 total, 1.3 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.23              0.00         1    0.228       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.3 total, 1.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562f56c869b0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.3 total, 1.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x562f56c869b0#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1.3 total, 1.3 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: _get_class not permitted to load lua
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: _get_class not permitted to load sdk
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: osd.1 0 load_pgs
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: osd.1 0 load_pgs opened 0 pgs
Nov 27 05:55:43 np0005537642 ceph-osd[82775]: osd.1 0 log_to_monitors true
Nov 27 05:55:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1[82771]: 2025-11-27T10:55:43.285+0000 7fe437273740 -1 osd.1 0 log_to_monitors true
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 27 05:55:43 np0005537642 systemd[1]: Started libpod-conmon-1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5.scope.
Nov 27 05:55:43 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:43 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b83fcb1af6b3ca6d3063340f369f6478b2d49383c095722a01a1c5abecc0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:43 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b83fcb1af6b3ca6d3063340f369f6478b2d49383c095722a01a1c5abecc0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:43 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b83fcb1af6b3ca6d3063340f369f6478b2d49383c095722a01a1c5abecc0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:43 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9b83fcb1af6b3ca6d3063340f369f6478b2d49383c095722a01a1c5abecc0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:43 np0005537642 podman[83325]: 2025-11-27 10:55:43.485330483 +0000 UTC m=+0.350427863 container init 1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Nov 27 05:55:43 np0005537642 podman[83325]: 2025-11-27 10:55:43.496628825 +0000 UTC m=+0.361726155 container start 1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 27 05:55:43 np0005537642 podman[83325]: 2025-11-27 10:55:43.550020649 +0000 UTC m=+0.415117969 container attach 1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_faraday, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e12 e12: 2 total, 1 up, 2 in
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 1 up, 2 in
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e12 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:43 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:44 np0005537642 ceph-mon[74338]: from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 27 05:55:44 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 27 05:55:44 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 27 05:55:44 np0005537642 lvm[83448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:55:44 np0005537642 lvm[83448]: VG ceph_vg0 finished
Nov 27 05:55:44 np0005537642 hungry_faraday[83374]: {}
Nov 27 05:55:44 np0005537642 systemd[1]: libpod-1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5.scope: Deactivated successfully.
Nov 27 05:55:44 np0005537642 systemd[1]: libpod-1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5.scope: Consumed 1.341s CPU time.
Nov 27 05:55:44 np0005537642 podman[83325]: 2025-11-27 10:55:44.386819622 +0000 UTC m=+1.251916952 container died 1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:55:44 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 427 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:44 np0005537642 systemd[1]: var-lib-containers-storage-overlay-ca9b83fcb1af6b3ca6d3063340f369f6478b2d49383c095722a01a1c5abecc0c-merged.mount: Deactivated successfully.
Nov 27 05:55:44 np0005537642 podman[83325]: 2025-11-27 10:55:44.727096088 +0000 UTC m=+1.592193428 container remove 1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_faraday, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 27 05:55:44 np0005537642 systemd[1]: libpod-conmon-1f80b5707550319c9054cf4786e566d03797e037cc0ee936cb11fefb60a549d5.scope: Deactivated successfully.
Nov 27 05:55:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:55:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 27 05:55:45 np0005537642 python3[83491]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:45 np0005537642 podman[83493]: 2025-11-27 10:55:45.168464013 +0000 UTC m=+0.042362647 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 e13: 2 total, 1 up, 2 in
Nov 27 05:55:45 np0005537642 ceph-osd[82775]: osd.1 0 done with init, starting boot process
Nov 27 05:55:45 np0005537642 ceph-osd[82775]: osd.1 0 start_boot
Nov 27 05:55:45 np0005537642 ceph-osd[82775]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 27 05:55:45 np0005537642 ceph-osd[82775]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 27 05:55:45 np0005537642 ceph-osd[82775]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 27 05:55:45 np0005537642 ceph-osd[82775]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 27 05:55:45 np0005537642 ceph-osd[82775]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 27 05:55:45 np0005537642 podman[83493]: 2025-11-27 10:55:45.348094958 +0000 UTC m=+0.221993542 container create 985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae (image=quay.io/ceph/ceph:v19, name=agitated_napier, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 1 up, 2 in
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:45 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:45 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:45 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:45 np0005537642 systemd[1]: Started libpod-conmon-985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae.scope.
Nov 27 05:55:45 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:45 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5932fab0e3e663e32ba3b4988cbafa75138ced201cf38d1eaa1ba281e736db8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:45 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5932fab0e3e663e32ba3b4988cbafa75138ced201cf38d1eaa1ba281e736db8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:45 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5932fab0e3e663e32ba3b4988cbafa75138ced201cf38d1eaa1ba281e736db8d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:45 np0005537642 podman[83493]: 2025-11-27 10:55:45.706949129 +0000 UTC m=+0.580847673 container init 985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae (image=quay.io/ceph/ceph:v19, name=agitated_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 27 05:55:45 np0005537642 podman[83493]: 2025-11-27 10:55:45.719607822 +0000 UTC m=+0.593506376 container start 985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae (image=quay.io/ceph/ceph:v19, name=agitated_napier, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 27 05:55:45 np0005537642 podman[83493]: 2025-11-27 10:55:45.765869396 +0000 UTC m=+0.639767990 container attach 985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae (image=quay.io/ceph/ceph:v19, name=agitated_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368151111' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 27 05:55:46 np0005537642 agitated_napier[83558]: 
Nov 27 05:55:46 np0005537642 agitated_napier[83558]: {"fsid":"4c838139-e0c9-556a-a9ca-e4422f459af7","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":141,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":1,"osd_up_since":1764240925,"num_in_osds":2,"osd_in_since":1764240909,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":447614976,"bytes_avail":21023027200,"bytes_total":21470642176},"fsmap":{"epoch":1,"btime":"2025-11-27T10:53:22:366160+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-27T10:54:48.818414+0000","services":{}},"progress_events":{}}
Nov 27 05:55:46 np0005537642 systemd[1]: libpod-985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae.scope: Deactivated successfully.
Nov 27 05:55:46 np0005537642 podman[83493]: 2025-11-27 10:55:46.229246573 +0000 UTC m=+1.103145117 container died 985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae (image=quay.io/ceph/ceph:v19, name=agitated_napier, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:46 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 427 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:46 np0005537642 systemd[1]: var-lib-containers-storage-overlay-5932fab0e3e663e32ba3b4988cbafa75138ced201cf38d1eaa1ba281e736db8d-merged.mount: Deactivated successfully.
Nov 27 05:55:46 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:46 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:55:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_10:55:46
Nov 27 05:55:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 05:55:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 27 05:55:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] pools ['.mgr']
Nov 27 05:55:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 upmap changes
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:46 np0005537642 podman[83493]: 2025-11-27 10:55:46.934500477 +0000 UTC m=+1.808399051 container remove 985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae (image=quay.io/ceph/ceph:v19, name=agitated_napier, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:55:46 np0005537642 systemd[1]: libpod-conmon-985b7c925f14ab124ab34af5e2c08cb9490ff446c48079c9eb3cb0533334d5ae.scope: Deactivated successfully.
Nov 27 05:55:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:47 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:47 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:47 np0005537642 python3[83729]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:55:47 np0005537642 podman[83678]: 2025-11-27 10:55:47.865401573 +0000 UTC m=+1.636628981 container exec 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:55:47 np0005537642 podman[83730]: 2025-11-27 10:55:47.841941579 +0000 UTC m=+0.082254120 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:55:48 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:48 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:48 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:48 np0005537642 podman[83730]: 2025-11-27 10:55:48.11491336 +0000 UTC m=+0.355225781 container create b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6 (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 21470642176
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.1557249951162338e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:55:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:48 np0005537642 systemd[1]: Started libpod-conmon-b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6.scope.
Nov 27 05:55:48 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:48 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfae6a948a744b358442557fcd39ce0bb477b34c646511a58d01785468624dd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:48 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfae6a948a744b358442557fcd39ce0bb477b34c646511a58d01785468624dd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:48 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:48 np0005537642 podman[83749]: 2025-11-27 10:55:48.804834061 +0000 UTC m=+0.797368344 container exec_died 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:55:48 np0005537642 podman[83678]: 2025-11-27 10:55:48.982440831 +0000 UTC m=+2.753668149 container exec_died 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 27 05:55:49 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:49 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:49 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:50 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:55:50 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:50 np0005537642 podman[83730]: 2025-11-27 10:55:50.605889507 +0000 UTC m=+2.846201988 container init b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6 (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:50 np0005537642 podman[83730]: 2025-11-27 10:55:50.611081433 +0000 UTC m=+2.851393864 container start b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6 (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 05:55:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:50 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 27 05:55:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4025923699' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:55:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 27 05:55:51 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:51 np0005537642 podman[83730]: 2025-11-27 10:55:51.217791823 +0000 UTC m=+3.458104234 container attach b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6 (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 27 05:55:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:55:51 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4025923699' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:55:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e14 e14: 2 total, 1 up, 2 in
Nov 27 05:55:51 np0005537642 busy_wilson[83762]: pool 'vms' created
Nov 27 05:55:51 np0005537642 systemd[1]: libpod-b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6.scope: Deactivated successfully.
Nov 27 05:55:51 np0005537642 podman[83730]: 2025-11-27 10:55:51.532939567 +0000 UTC m=+3.773251948 container died b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6 (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 27 05:55:51 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 1 up, 2 in
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:55:52 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:55:52 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v69: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:52 np0005537642 systemd[1]: var-lib-containers-storage-overlay-dfae6a948a744b358442557fcd39ce0bb477b34c646511a58d01785468624dd7-merged.mount: Deactivated successfully.
Nov 27 05:55:52 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:52 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/4025923699' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/4025923699' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:55:52 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:53 np0005537642 podman[83730]: 2025-11-27 10:55:53.191956588 +0000 UTC m=+5.432269019 container remove b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6 (image=quay.io/ceph/ceph:v19, name=busy_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 27 05:55:53 np0005537642 systemd[1]: libpod-conmon-b4822e5d4bf4fbc2102b5916eec394bafda480f1bec75bcf24844b56dc08f4c6.scope: Deactivated successfully.
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 27 05:55:53 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Nov 27 05:55:53 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 27 05:55:53 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:53 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:53 np0005537642 python3[83885]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:55:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:53 np0005537642 podman[83934]: 2025-11-27 10:55:53.653376712 +0000 UTC m=+0.036114088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:55:53 np0005537642 podman[83934]: 2025-11-27 10:55:53.747725711 +0000 UTC m=+0.130463077 container create 0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 27 05:55:53 np0005537642 systemd[1]: Started libpod-conmon-0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f.scope.
Nov 27 05:55:53 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:53 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d39d2f31a0060268931dd1b0167a1a2e84b833bc3dc15b3d07215982c8963d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:53 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d39d2f31a0060268931dd1b0167a1a2e84b833bc3dc15b3d07215982c8963d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:54 np0005537642 podman[83934]: 2025-11-27 10:55:54.111766328 +0000 UTC m=+0.494503754 container init 0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:54 np0005537642 podman[83934]: 2025-11-27 10:55:54.120650806 +0000 UTC m=+0.503388162 container start 0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:55:54 np0005537642 podman[83934]: 2025-11-27 10:55:54.24522513 +0000 UTC m=+0.627962486 container attach 0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 27 05:55:54 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v70: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:54 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:54 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 27 05:55:54 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4007303224' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:55:54 np0005537642 podman[84097]: 2025-11-27 10:55:54.861640167 +0000 UTC m=+0.038202235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:55 np0005537642 podman[84097]: 2025-11-27 10:55:55.055721685 +0000 UTC m=+0.232283693 container create e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_yalow, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:55:55 np0005537642 systemd[1]: Started libpod-conmon-e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276.scope.
Nov 27 05:55:55 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 27 05:55:55 np0005537642 podman[84097]: 2025-11-27 10:55:55.4308547 +0000 UTC m=+0.607416678 container init e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_yalow, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:55:55 np0005537642 podman[84097]: 2025-11-27 10:55:55.44384325 +0000 UTC m=+0.620405248 container start e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_yalow, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:55 np0005537642 peaceful_yalow[84113]: 167 167
Nov 27 05:55:55 np0005537642 systemd[1]: libpod-e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276.scope: Deactivated successfully.
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4007303224' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e15 e15: 2 total, 1 up, 2 in
Nov 27 05:55:55 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:55 np0005537642 intelligent_lalande[83962]: pool 'volumes' created
Nov 27 05:55:55 np0005537642 podman[84097]: 2025-11-27 10:55:55.59506609 +0000 UTC m=+0.771628088 container attach e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 27 05:55:55 np0005537642 podman[84097]: 2025-11-27 10:55:55.595552971 +0000 UTC m=+0.772114959 container died e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_yalow, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 27 05:55:55 np0005537642 systemd[1]: libpod-0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f.scope: Deactivated successfully.
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: Adjusting osd_memory_target on compute-1 to  5247M
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/4007303224' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 1 up, 2 in
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:55 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:55 np0005537642 podman[83934]: 2025-11-27 10:55:55.933502845 +0000 UTC m=+2.316240181 container died 0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 27 05:55:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:55:56 np0005537642 systemd[1]: var-lib-containers-storage-overlay-4d39d2f31a0060268931dd1b0167a1a2e84b833bc3dc15b3d07215982c8963d4-merged.mount: Deactivated successfully.
Nov 27 05:55:56 np0005537642 systemd[75677]: Starting Mark boot as successful...
Nov 27 05:55:56 np0005537642 systemd[75677]: Finished Mark boot as successful.
Nov 27 05:55:56 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v72: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:56 np0005537642 podman[83934]: 2025-11-27 10:55:56.52299274 +0000 UTC m=+2.905730106 container remove 0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f (image=quay.io/ceph/ceph:v19, name=intelligent_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Nov 27 05:55:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 27 05:55:56 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:56 np0005537642 systemd[1]: libpod-conmon-0ebd6a5d10b708ea326d5b6fabb311c8e77f3206c404e74987ff83b4dfad6b8f.scope: Deactivated successfully.
Nov 27 05:55:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:56 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e16 e16: 2 total, 1 up, 2 in
Nov 27 05:55:56 np0005537642 systemd[1]: var-lib-containers-storage-overlay-75ea095dbc0d13586495f5d80a000548155fd523aeef9b63d8bccc0722f13b24-merged.mount: Deactivated successfully.
Nov 27 05:55:56 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 1 up, 2 in
Nov 27 05:55:56 np0005537642 python3[84169]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:55:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:56 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:57 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/4007303224' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:55:57 np0005537642 podman[84097]: 2025-11-27 10:55:57.179781571 +0000 UTC m=+2.356343559 container remove e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_yalow, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:55:57 np0005537642 systemd[1]: libpod-conmon-e13f61d686da168f76ed8032f32e095e8ce248111ce09f59823329ed4b8be276.scope: Deactivated successfully.
Nov 27 05:55:57 np0005537642 podman[84170]: 2025-11-27 10:55:57.221439362 +0000 UTC m=+0.270242172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:55:57 np0005537642 podman[84170]: 2025-11-27 10:55:57.37748648 +0000 UTC m=+0.426289220 container create 12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e (image=quay.io/ceph/ceph:v19, name=sharp_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 27 05:55:57 np0005537642 podman[84190]: 2025-11-27 10:55:57.38647105 +0000 UTC m=+0.036869495 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:55:57 np0005537642 systemd[1]: Started libpod-conmon-12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e.scope.
Nov 27 05:55:57 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:57 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd285528720aed29ab0eb3cf756ba194181b8ebb35ed92a240710519c511fe6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:57 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acd285528720aed29ab0eb3cf756ba194181b8ebb35ed92a240710519c511fe6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:57 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:57 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:57 np0005537642 podman[84190]: 2025-11-27 10:55:57.578182726 +0000 UTC m=+0.228581161 container create 8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_euclid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:57 np0005537642 podman[84170]: 2025-11-27 10:55:57.679385648 +0000 UTC m=+0.728188458 container init 12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e (image=quay.io/ceph/ceph:v19, name=sharp_jepsen, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 05:55:57 np0005537642 podman[84170]: 2025-11-27 10:55:57.6902116 +0000 UTC m=+0.739014350 container start 12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e (image=quay.io/ceph/ceph:v19, name=sharp_jepsen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:57 np0005537642 podman[84170]: 2025-11-27 10:55:57.749411343 +0000 UTC m=+0.798214093 container attach 12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e (image=quay.io/ceph/ceph:v19, name=sharp_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 27 05:55:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 27 05:55:57 np0005537642 systemd[1]: Started libpod-conmon-8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a.scope.
Nov 27 05:55:57 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:55:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e17 e17: 2 total, 1 up, 2 in
Nov 27 05:55:57 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835f11f49202e13b0c4e8545e18eddbc4584f1f3ad478377e79cc060cad99e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:57 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835f11f49202e13b0c4e8545e18eddbc4584f1f3ad478377e79cc060cad99e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:57 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835f11f49202e13b0c4e8545e18eddbc4584f1f3ad478377e79cc060cad99e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:57 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8835f11f49202e13b0c4e8545e18eddbc4584f1f3ad478377e79cc060cad99e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:55:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 1 up, 2 in
Nov 27 05:55:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:58 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:58 np0005537642 podman[84190]: 2025-11-27 10:55:58.012893771 +0000 UTC m=+0.663292186 container init 8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:55:58 np0005537642 podman[84190]: 2025-11-27 10:55:58.019163231 +0000 UTC m=+0.669561626 container start 8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:55:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 27 05:55:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/201554243' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:55:58 np0005537642 podman[84190]: 2025-11-27 10:55:58.241085531 +0000 UTC m=+0.891483946 container attach 8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 27 05:55:58 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v75: 3 pgs: 1 creating+peering, 1 unknown, 1 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:55:58 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:58 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]: [
Nov 27 05:55:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:    {
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "available": false,
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "being_replaced": false,
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "ceph_device_lvm": false,
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "device_id": "QEMU_DVD-ROM_QM00001",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "lsm_data": {},
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "lvs": [],
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "path": "/dev/sr0",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "rejected_reasons": [
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "Insufficient space (<5GB)",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "Has a FileSystem"
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        ],
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        "sys_api": {
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "actuators": null,
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "device_nodes": [
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:                "sr0"
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            ],
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "devname": "sr0",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "human_readable_size": "482.00 KB",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "id_bus": "ata",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "model": "QEMU DVD-ROM",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "nr_requests": "2",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "parent": "/dev/sr0",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "partitions": {},
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "path": "/dev/sr0",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "removable": "1",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "rev": "2.5+",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "ro": "0",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "rotational": "1",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "sas_address": "",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "sas_device_handle": "",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "scheduler_mode": "mq-deadline",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "sectors": 0,
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "sectorsize": "2048",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "size": 493568.0,
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "support_discard": "2048",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "type": "disk",
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:            "vendor": "QEMU"
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:        }
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]:    }
Nov 27 05:55:58 np0005537642 suspicious_euclid[84231]: ]
Nov 27 05:55:58 np0005537642 systemd[1]: libpod-8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a.scope: Deactivated successfully.
Nov 27 05:55:58 np0005537642 podman[84190]: 2025-11-27 10:55:58.957958625 +0000 UTC m=+1.608357060 container died 8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_euclid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:55:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/201554243' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:55:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e18 e18: 2 total, 1 up, 2 in
Nov 27 05:55:59 np0005537642 sharp_jepsen[84204]: pool 'backups' created
Nov 27 05:55:59 np0005537642 systemd[1]: libpod-12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e.scope: Deactivated successfully.
Nov 27 05:55:59 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 1 up, 2 in
Nov 27 05:55:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:59 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:55:59 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/201554243' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:55:59 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:55:59 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8835f11f49202e13b0c4e8545e18eddbc4584f1f3ad478377e79cc060cad99e6-merged.mount: Deactivated successfully.
Nov 27 05:55:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:55:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:55:59 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 27 05:56:00 np0005537642 podman[84190]: 2025-11-27 10:56:00.243560199 +0000 UTC m=+2.893958614 container remove 8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:56:00 np0005537642 podman[84170]: 2025-11-27 10:56:00.24895419 +0000 UTC m=+3.297756920 container died 12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e (image=quay.io/ceph/ceph:v19, name=sharp_jepsen, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:56:00 np0005537642 systemd[1]: libpod-conmon-8599447a53135e965fe1e73c9df72359942a2902ebf436c77d0823923b08e90a.scope: Deactivated successfully.
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e19 e19: 2 total, 1 up, 2 in
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 1 up, 2 in
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:00 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:00 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v78: 4 pgs: 1 creating+peering, 2 unknown, 1 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:56:00 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:00 np0005537642 systemd[1]: var-lib-containers-storage-overlay-acd285528720aed29ab0eb3cf756ba194181b8ebb35ed92a240710519c511fe6-merged.mount: Deactivated successfully.
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:56:00 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/201554243' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:56:00 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:56:01 np0005537642 podman[84170]: 2025-11-27 10:56:01.155917513 +0000 UTC m=+4.204720253 container remove 12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e (image=quay.io/ceph/ceph:v19, name=sharp_jepsen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:56:01 np0005537642 systemd[1]: libpod-conmon-12fb1d0c9813743fa6cc3d37db64d5994dd79dd6e75b9d33edef9e0a1aba4c9e.scope: Deactivated successfully.
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 27 05:56:01 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 128.0M
Nov 27 05:56:01 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 128.0M
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 27 05:56:01 np0005537642 ceph-mgr[74636]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 27 05:56:01 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 27 05:56:01 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:01 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e20 e20: 2 total, 1 up, 2 in
Nov 27 05:56:01 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 1 up, 2 in
Nov 27 05:56:01 np0005537642 python3[85379]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:56:01 np0005537642 podman[85380]: 2025-11-27 10:56:01.73881376 +0000 UTC m=+0.042994372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:02 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:02 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v80: 4 pgs: 1 unknown, 3 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:02 np0005537642 podman[85380]: 2025-11-27 10:56:02.512441622 +0000 UTC m=+0.816622144 container create 9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115 (image=quay.io/ceph/ceph:v19, name=laughing_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 27 05:56:02 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:02 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 27 05:56:02 np0005537642 systemd[1]: Started libpod-conmon-9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115.scope.
Nov 27 05:56:02 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:56:02 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3413b253d92dcc9b6134f48443afe23b1ca6042877c0cf044559026d2634e3d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:02 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3413b253d92dcc9b6134f48443afe23b1ca6042877c0cf044559026d2634e3d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:02 np0005537642 podman[85380]: 2025-11-27 10:56:02.973721892 +0000 UTC m=+1.277902494 container init 9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115 (image=quay.io/ceph/ceph:v19, name=laughing_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:56:02 np0005537642 podman[85380]: 2025-11-27 10:56:02.98704574 +0000 UTC m=+1.291226282 container start 9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115 (image=quay.io/ceph/ceph:v19, name=laughing_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Nov 27 05:56:03 np0005537642 podman[85380]: 2025-11-27 10:56:03.113926996 +0000 UTC m=+1.418107598 container attach 9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115 (image=quay.io/ceph/ceph:v19, name=laughing_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:56:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 27 05:56:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430408169' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:56:03 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:03 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 27 05:56:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430408169' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:56:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e21 e21: 2 total, 1 up, 2 in
Nov 27 05:56:03 np0005537642 laughing_shaw[85395]: pool 'images' created
Nov 27 05:56:03 np0005537642 systemd[1]: libpod-9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115.scope: Deactivated successfully.
Nov 27 05:56:03 np0005537642 podman[85380]: 2025-11-27 10:56:03.886349011 +0000 UTC m=+2.190529583 container died 9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115 (image=quay.io/ceph/ceph:v19, name=laughing_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 27 05:56:03 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 1 up, 2 in
Nov 27 05:56:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:04 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:04 np0005537642 ceph-mon[74338]: Adjusting osd_memory_target on compute-0 to 128.0M
Nov 27 05:56:04 np0005537642 ceph-mon[74338]: Unable to set osd_memory_target on compute-0 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 27 05:56:04 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/3430408169' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:56:04 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v82: 5 pgs: 2 unknown, 3 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:04 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:04 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:04 np0005537642 systemd[1]: var-lib-containers-storage-overlay-b3413b253d92dcc9b6134f48443afe23b1ca6042877c0cf044559026d2634e3d-merged.mount: Deactivated successfully.
Nov 27 05:56:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 27 05:56:05 np0005537642 podman[85380]: 2025-11-27 10:56:05.386681724 +0000 UTC m=+3.690862266 container remove 9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115 (image=quay.io/ceph/ceph:v19, name=laughing_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:56:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e22 e22: 2 total, 1 up, 2 in
Nov 27 05:56:05 np0005537642 systemd[1]: libpod-conmon-9e285d469bde22ee9acb048445ce86b5ae5ce3f3365e70ce40a8db9309211115.scope: Deactivated successfully.
Nov 27 05:56:05 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 1 up, 2 in
Nov 27 05:56:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:05 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:05 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/3430408169' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:56:05 np0005537642 python3[85459]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:56:05 np0005537642 podman[85460]: 2025-11-27 10:56:05.814177639 +0000 UTC m=+0.026898692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:56:06 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:56:06 np0005537642 podman[85460]: 2025-11-27 10:56:06.081821012 +0000 UTC m=+0.294542035 container create 1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596 (image=quay.io/ceph/ceph:v19, name=recursing_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 27 05:56:06 np0005537642 systemd[1]: Started libpod-conmon-1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596.scope.
Nov 27 05:56:06 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v84: 5 pgs: 2 unknown, 3 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 27 05:56:06 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:56:06 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e30fc976380824822136f4167121a25bd089812b693419f475766845781acab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:06 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e30fc976380824822136f4167121a25bd089812b693419f475766845781acab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:06 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:06 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e23 e23: 2 total, 1 up, 2 in
Nov 27 05:56:06 np0005537642 podman[85460]: 2025-11-27 10:56:06.828417419 +0000 UTC m=+1.041138502 container init 1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596 (image=quay.io/ceph/ceph:v19, name=recursing_gagarin, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:56:06 np0005537642 podman[85460]: 2025-11-27 10:56:06.839143939 +0000 UTC m=+1.051864972 container start 1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596 (image=quay.io/ceph/ceph:v19, name=recursing_gagarin, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 1 up, 2 in
Nov 27 05:56:07 np0005537642 podman[85460]: 2025-11-27 10:56:07.121624803 +0000 UTC m=+1.334345796 container attach 1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596 (image=quay.io/ceph/ceph:v19, name=recursing_gagarin, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:07 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2854736665' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:07 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:07 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2854736665' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e24 e24: 2 total, 1 up, 2 in
Nov 27 05:56:08 np0005537642 recursing_gagarin[85475]: pool 'cephfs.cephfs.meta' created
Nov 27 05:56:08 np0005537642 systemd[1]: libpod-1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596.scope: Deactivated successfully.
Nov 27 05:56:08 np0005537642 podman[85460]: 2025-11-27 10:56:08.143137845 +0000 UTC m=+2.355858848 container died 1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596 (image=quay.io/ceph/ceph:v19, name=recursing_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 1 up, 2 in
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:08 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:08 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v87: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:08 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:08 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2854736665' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:56:08 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2854736665' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:56:08 np0005537642 systemd[1]: var-lib-containers-storage-overlay-7e30fc976380824822136f4167121a25bd089812b693419f475766845781acab-merged.mount: Deactivated successfully.
Nov 27 05:56:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 27 05:56:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e25 e25: 2 total, 1 up, 2 in
Nov 27 05:56:09 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 1 up, 2 in
Nov 27 05:56:09 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:09 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:09 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:09 np0005537642 podman[85460]: 2025-11-27 10:56:09.756864684 +0000 UTC m=+3.969585717 container remove 1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596 (image=quay.io/ceph/ceph:v19, name=recursing_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:56:09 np0005537642 systemd[1]: libpod-conmon-1de240c25c336e4905ae09a70350958187bf50840a9a342b99cf9760e06d4596.scope: Deactivated successfully.
Nov 27 05:56:10 np0005537642 python3[85540]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:56:10 np0005537642 podman[85541]: 2025-11-27 10:56:10.164611518 +0000 UTC m=+0.030124965 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:56:10 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v89: 6 pgs: 2 unknown, 4 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:10 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:10 np0005537642 podman[85541]: 2025-11-27 10:56:10.679287481 +0000 UTC m=+0.544800858 container create 836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535 (image=quay.io/ceph/ceph:v19, name=eloquent_haslett, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 27 05:56:10 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:11 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:11 np0005537642 systemd[1]: Started libpod-conmon-836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535.scope.
Nov 27 05:56:11 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:56:11 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd6a43a12e666678bf046f8f770af27cd569bd99a4922955ac0f38d6c7c4bd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:11 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bd6a43a12e666678bf046f8f770af27cd569bd99a4922955ac0f38d6c7c4bd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e26 e26: 2 total, 1 up, 2 in
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 1 up, 2 in
Nov 27 05:56:11 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:11 np0005537642 podman[85541]: 2025-11-27 10:56:11.615493667 +0000 UTC m=+1.481007084 container init 836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535 (image=quay.io/ceph/ceph:v19, name=eloquent_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:56:11 np0005537642 podman[85541]: 2025-11-27 10:56:11.625873569 +0000 UTC m=+1.491386936 container start 836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535 (image=quay.io/ceph/ceph:v19, name=eloquent_haslett, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:56:11 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:11 np0005537642 ceph-mon[74338]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:11 np0005537642 podman[85541]: 2025-11-27 10:56:11.873862652 +0000 UTC m=+1.739375999 container attach 836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535 (image=quay.io/ceph/ceph:v19, name=eloquent_haslett, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1669159939' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:56:12 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v91: 6 pgs: 1 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:12 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:12 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 27 05:56:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:56:13 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:56:13 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1669159939' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e27 e27: 2 total, 1 up, 2 in
Nov 27 05:56:13 np0005537642 eloquent_haslett[85556]: pool 'cephfs.cephfs.data' created
Nov 27 05:56:13 np0005537642 systemd[1]: libpod-836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535.scope: Deactivated successfully.
Nov 27 05:56:13 np0005537642 podman[85541]: 2025-11-27 10:56:13.37354052 +0000 UTC m=+3.239053857 container died 836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535 (image=quay.io/ceph/ceph:v19, name=eloquent_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1669159939' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:13 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:13 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:56:13 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 1 up, 2 in
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:13 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:13 np0005537642 systemd[1]: var-lib-containers-storage-overlay-2bd6a43a12e666678bf046f8f770af27cd569bd99a4922955ac0f38d6c7c4bd7-merged.mount: Deactivated successfully.
Nov 27 05:56:14 np0005537642 podman[85583]: 2025-11-27 10:56:14.245024089 +0000 UTC m=+0.859333878 container remove 836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535 (image=quay.io/ceph/ceph:v19, name=eloquent_haslett, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 27 05:56:14 np0005537642 systemd[1]: libpod-conmon-836ba97a2bd98cfb8675b00b80bb3e46a0bd2042c7756825d288ef4ed0b80535.scope: Deactivated successfully.
Nov 27 05:56:14 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:56:14 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:56:14 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v93: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 27 05:56:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:56:14 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:56:14 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1669159939' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 27 05:56:14 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:56:14 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:14 np0005537642 python3[85623]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:56:14 np0005537642 podman[85624]: 2025-11-27 10:56:14.735450341 +0000 UTC m=+0.041140551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:56:14 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:56:14 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:56:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:14 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:15 np0005537642 podman[85624]: 2025-11-27 10:56:15.010900608 +0000 UTC m=+0.316590828 container create c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 27 05:56:15 np0005537642 systemd[1]: Started libpod-conmon-c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc.scope.
Nov 27 05:56:15 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:56:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7c17c6480a769809ec1c6a7e9ab31a0671bf60f3b77d0fad964ea07b2384f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:15 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c7c17c6480a769809ec1c6a7e9ab31a0671bf60f3b77d0fad964ea07b2384f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:56:15 np0005537642 podman[85624]: 2025-11-27 10:56:15.541914526 +0000 UTC m=+0.847604816 container init c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:56:15 np0005537642 podman[85624]: 2025-11-27 10:56:15.549293791 +0000 UTC m=+0.854984001 container start c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 27 05:56:15 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:15 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:15 np0005537642 podman[85624]: 2025-11-27 10:56:15.829870771 +0000 UTC m=+1.135560991 container attach c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/499688558' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:56:16 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v94: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:56:16 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:16 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:16 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v95: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:16 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev fb211363-14de-485e-ab62-6eecd0634d4a (Updating mon deployment (+2 -> 3))
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:56:16 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Nov 27 05:56:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Nov 27 05:56:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 27 05:56:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/499688558' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 27 05:56:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e28 e28: 2 total, 1 up, 2 in
Nov 27 05:56:17 np0005537642 agitated_archimedes[85639]: enabled application 'rbd' on pool 'vms'
Nov 27 05:56:17 np0005537642 systemd[1]: libpod-c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc.scope: Deactivated successfully.
Nov 27 05:56:17 np0005537642 podman[85624]: 2025-11-27 10:56:17.040993161 +0000 UTC m=+2.346683401 container died c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 27 05:56:17 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 1 up, 2 in
Nov 27 05:56:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:17 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:17 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 27 05:56:17 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:17 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/499688558' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/499688558' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 27 05:56:18 np0005537642 systemd[1]: var-lib-containers-storage-overlay-4c7c17c6480a769809ec1c6a7e9ab31a0671bf60f3b77d0fad964ea07b2384f7-merged.mount: Deactivated successfully.
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v97: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:18 np0005537642 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Nov 27 05:56:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 27 05:56:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:56:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 27 05:56:19 np0005537642 podman[85624]: 2025-11-27 10:56:19.37816576 +0000 UTC m=+4.683855990 container remove c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:56:19 np0005537642 systemd[1]: libpod-conmon-c7f8693802fa7014c614683c21edafb3860757173e6532cfce8341219159abdc.scope: Deactivated successfully.
Nov 27 05:56:19 np0005537642 ceph-mon[74338]: Deploying daemon mon.compute-2 on compute-2
Nov 27 05:56:19 np0005537642 ceph-mon[74338]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Nov 27 05:56:19 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:20 np0005537642 python3[85701]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:20 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Nov 27 05:56:20 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/561364870; not ready for session (expect reconnect)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:20 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Nov 27 05:56:20 np0005537642 podman[85703]: 2025-11-27 10:56:20.109929496 +0000 UTC m=+0.038263766 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:20 np0005537642 podman[85703]: 2025-11-27 10:56:20.326080388 +0000 UTC m=+0.254414578 container create 601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389 (image=quay.io/ceph/ceph:v19, name=eloquent_elgamal, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(probing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:20 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Nov 27 05:56:20 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:20 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v98: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:20 np0005537642 systemd[1]: Started libpod-conmon-601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389.scope.
Nov 27 05:56:20 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:56:20 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8d47b75f351edf0be26be0a67a09d94377bd7ebc2b03d49cb8c87bd26318a2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:20 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8d47b75f351edf0be26be0a67a09d94377bd7ebc2b03d49cb8c87bd26318a2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:20 np0005537642 podman[85703]: 2025-11-27 10:56:20.893129652 +0000 UTC m=+0.821463922 container init 601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389 (image=quay.io/ceph/ceph:v19, name=eloquent_elgamal, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 27 05:56:20 np0005537642 podman[85703]: 2025-11-27 10:56:20.900518867 +0000 UTC m=+0.828853067 container start 601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389 (image=quay.io/ceph/ceph:v19, name=eloquent_elgamal, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:56:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 27 05:56:21 np0005537642 podman[85703]: 2025-11-27 10:56:21.062326934 +0000 UTC m=+0.990661124 container attach 601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389 (image=quay.io/ceph/ceph:v19, name=eloquent_elgamal, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:56:21 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/561364870; not ready for session (expect reconnect)
Nov 27 05:56:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:21 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 27 05:56:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 1.250 iops: 319.891 elapsed_sec: 9.378
Nov 27 05:56:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config set, name=osd_mclock_max_capacity_iops_hdd}] v 0)
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 0 waiting for initial osdmap
Nov 27 05:56:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1[82771]: 2025-11-27T10:56:21.527+0000 7fe433a09640 -1 osd.1 0 waiting for initial osdmap
Nov 27 05:56:21 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 28 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 28 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 28 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 28 check_osdmap_features require_osd_release unknown -> squid
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 28 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 28 set_numa_affinity not setting numa affinity
Nov 27 05:56:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-osd-1[82771]: 2025-11-27T10:56:21.723+0000 7fe42e81e640 -1 osd.1 28 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 27 05:56:21 np0005537642 ceph-osd[82775]: osd.1 28 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Nov 27 05:56:22 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/561364870; not ready for session (expect reconnect)
Nov 27 05:56:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:22 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 27 05:56:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 27 05:56:22 np0005537642 ceph-osd[82775]: osd.1 28 tick checking mon for new map
Nov 27 05:56:22 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:22 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v99: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:23 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/561364870; not ready for session (expect reconnect)
Nov 27 05:56:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:23 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 27 05:56:23 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 27 05:56:24 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/561364870; not ready for session (expect reconnect)
Nov 27 05:56:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:24 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:24 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 27 05:56:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 27 05:56:24 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 27 05:56:24 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v100: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:25 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/561364870; not ready for session (expect reconnect)
Nov 27 05:56:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:25 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 27 05:56:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Nov 27 05:56:25 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:25 np0005537642 ceph-mon[74338]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Nov 27 05:56:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 27 05:56:26 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/561364870; not ready for session (expect reconnect)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:26 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap epoch 2
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : last_changed 2025-11-27T10:56:20.146685+0000
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : created 2025-11-27T10:53:19.458310+0000
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap 
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e28: 2 total, 1 up, 2 in
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qnrkij(active, since 2m)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 27 05:56:26 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' 
Nov 27 05:56:26 np0005537642 ceph-osd[82775]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 491687.17 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 27 05:56:26 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6802/2860845679; not ready for session (expect reconnect)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:26 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 27 05:56:26 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v101: 7 pgs: 2 unknown, 5 active+clean; 449 KiB data, 27 MiB used, 20 GiB / 20 GiB avail
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e29 e29: 2 total, 2 up, 2 in
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: Health detail: HEALTH_WARN 5 pool(s) do not have an application enabled
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: [WRN] POOL_APP_NOT_ENABLED: 5 pool(s) do not have an application enabled
Nov 27 05:56:26 np0005537642 ceph-mon[74338]:    application not enabled on pool 'volumes'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]:    application not enabled on pool 'backups'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]:    application not enabled on pool 'images'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 27 05:56:26 np0005537642 ceph-mon[74338]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679] boot
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e29: 2 total, 2 up, 2 in
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 27 05:56:26 np0005537642 ceph-osd[82775]: osd.1 29 state: booting -> active
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:56:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:56:26 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 29 pg[1.0( empty local-lis/les=0/0 n=0 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=29) [1] r=0 lpr=29 pi=[9,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:56:26 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 29 pg[2.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:56:26 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 29 pg[7.0( empty local-lis/les=0/0 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:56:26 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Nov 27 05:56:26 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Nov 27 05:56:27 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/561364870; not ready for session (expect reconnect)
Nov 27 05:56:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Nov 27 05:56:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/739780601' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 27 05:56:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 27 05:56:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/739780601' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 27 05:56:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e30 e30: 2 total, 2 up, 2 in
Nov 27 05:56:27 np0005537642 eloquent_elgamal[85718]: enabled application 'rbd' on pool 'volumes'
Nov 27 05:56:28 np0005537642 systemd[1]: libpod-601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389.scope: Deactivated successfully.
Nov 27 05:56:28 np0005537642 podman[85703]: 2025-11-27 10:56:28.003232259 +0000 UTC m=+7.931566549 container died 601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389 (image=quay.io/ceph/ceph:v19, name=eloquent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 27 05:56:28 np0005537642 ceph-mon[74338]: from='osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679]' entity='osd.1' 
Nov 27 05:56:28 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:28 np0005537642 ceph-mon[74338]: osd.1 [v2:192.168.122.100:6802/2860845679,v1:192.168.122.100:6803/2860845679] boot
Nov 27 05:56:28 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 27 05:56:28 np0005537642 ceph-mon[74338]: Deploying daemon mon.compute-1 on compute-1
Nov 27 05:56:28 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/739780601' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 27 05:56:28 np0005537642 ceph-mgr[74636]: mgr.server handle_report got status from non-daemon mon.compute-2
Nov 27 05:56:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:56:28.149+0000 7f7afed36640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Nov 27 05:56:28 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 30 pg[2.0( empty local-lis/les=29/30 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 pi=[14,29)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:56:28 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 30 pg[7.0( empty local-lis/les=29/30 n=0 ec=27/27 lis/c=0/0 les/c/f=0/0/0 sis=29) [1] r=0 lpr=29 pi=[27,29)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:56:28 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e30: 2 total, 2 up, 2 in
Nov 27 05:56:28 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 30 pg[1.0( v 10'32 lc 10'30 (0'0,10'32] local-lis/les=29/30 n=2 ec=9/9 lis/c=9/9 les/c/f=10/10/0 sis=29) [1] r=0 lpr=29 pi=[9,29)/1 crt=10'32 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:56:28 np0005537642 systemd[1]: var-lib-containers-storage-overlay-ea8d47b75f351edf0be26be0a67a09d94377bd7ebc2b03d49cb8c87bd26318a2-merged.mount: Deactivated successfully.
Nov 27 05:56:28 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v104: 7 pgs: 2 creating+peering, 1 peering, 4 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:28 np0005537642 podman[85703]: 2025-11-27 10:56:28.87475067 +0000 UTC m=+8.803084890 container remove 601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389 (image=quay.io/ceph/ceph:v19, name=eloquent_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default)
Nov 27 05:56:28 np0005537642 systemd[1]: libpod-conmon-601f12fbee5609148be2f3f4abb90c6f02077d9d0fb5fd21a30597a3d4bbf389.scope: Deactivated successfully.
Nov 27 05:56:29 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:29 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 27 05:56:29 np0005537642 python3[85782]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:56:29 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/739780601' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 27 05:56:29 np0005537642 podman[85783]: 2025-11-27 10:56:29.42551287 +0000 UTC m=+0.027922575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:56:30 np0005537642 podman[85783]: 2025-11-27 10:56:30.020939963 +0000 UTC m=+0.623349618 container create 1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3 (image=quay.io/ceph/ceph:v19, name=laughing_yalow, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 27 05:56:30 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:30 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e31 e31: 2 total, 2 up, 2 in
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: paxos.0).electionLogic(10) init, last seen epoch 10
Nov 27 05:56:30 np0005537642 systemd[1]: Started libpod-conmon-1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3.scope.
Nov 27 05:56:30 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:56:30 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee00edb25a5b67b7257703125a0446a6c2413a74161981eb058e4a48d80531/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:30 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7ee00edb25a5b67b7257703125a0446a6c2413a74161981eb058e4a48d80531/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:56:30 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:56:30 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v106: 7 pgs: 2 creating+peering, 1 peering, 4 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:30 np0005537642 podman[85783]: 2025-11-27 10:56:30.672421202 +0000 UTC m=+1.274830847 container init 1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3 (image=quay.io/ceph/ceph:v19, name=laughing_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:56:30 np0005537642 podman[85783]: 2025-11-27 10:56:30.684882676 +0000 UTC m=+1.287292321 container start 1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3 (image=quay.io/ceph/ceph:v19, name=laughing_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Nov 27 05:56:30 np0005537642 podman[85783]: 2025-11-27 10:56:30.801912993 +0000 UTC m=+1.404322648 container attach 1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3 (image=quay.io/ceph/ceph:v19, name=laughing_yalow, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 27 05:56:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 27 05:56:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 27 05:56:31 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:31 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:31 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 27 05:56:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 27 05:56:32 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:32 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 27 05:56:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 27 05:56:32 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v107: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 75 KiB/s, 0 objects/s recovering
Nov 27 05:56:33 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:33 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 27 05:56:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 27 05:56:33 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 8905c9b6-d4c0-42ce-95e6-861ab2f349c8 (Global Recovery Event) in 15 seconds
Nov 27 05:56:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 27 05:56:34 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:34 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:34 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 27 05:56:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 27 05:56:34 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v108: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 57 KiB/s, 0 objects/s recovering
Nov 27 05:56:35 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:35 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap epoch 3
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : last_changed 2025-11-27T10:56:29.287830+0000
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : created 2025-11-27T10:53:19.458310+0000
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap 
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qnrkij(active, since 2m)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2; 4 pool(s) do not have an application enabled
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:35 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev fb211363-14de-485e-ab62-6eecd0634d4a (Updating mon deployment (+2 -> 3))
Nov 27 05:56:35 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event fb211363-14de-485e-ab62-6eecd0634d4a (Updating mon deployment (+2 -> 3)) in 19 seconds
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:35 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev db2dd016-a333-4f0c-9a87-1578dd98323c (Updating mgr deployment (+2 -> 3))
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.yyrxaz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 27 05:56:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.yyrxaz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.yyrxaz", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:56:36 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.yyrxaz on compute-2
Nov 27 05:56:36 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.yyrxaz on compute-2
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: paxos.0).electionLogic(12) init, last seen epoch 12
Nov 27 05:56:36 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:36 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap epoch 3
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : last_changed 2025-11-27T10:56:29.287830+0000
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : created 2025-11-27T10:53:19.458310+0000
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap 
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e31: 2 total, 2 up, 2 in
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qnrkij(active, since 2m)
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 27 05:56:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 27 05:56:36 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v109: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 52 KiB/s, 0 objects/s recovering
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1329207592' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 27 05:56:37 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/4071902992; not ready for session (expect reconnect)
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: Deploying daemon mgr.compute-2.yyrxaz on compute-2
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Nov 27 05:56:37 np0005537642 ceph-mon[74338]:    application not enabled on pool 'backups'
Nov 27 05:56:37 np0005537642 ceph-mon[74338]:    application not enabled on pool 'images'
Nov 27 05:56:37 np0005537642 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 27 05:56:37 np0005537642 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 27 05:56:37 np0005537642 ceph-mon[74338]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1329207592' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 27 05:56:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1329207592' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e32 e32: 2 total, 2 up, 2 in
Nov 27 05:56:38 np0005537642 laughing_yalow[85799]: enabled application 'rbd' on pool 'backups'
Nov 27 05:56:38 np0005537642 systemd[1]: libpod-1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3.scope: Deactivated successfully.
Nov 27 05:56:38 np0005537642 podman[85783]: 2025-11-27 10:56:38.192255162 +0000 UTC m=+8.794664807 container died 1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3 (image=quay.io/ceph/ceph:v19, name=laughing_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.npcryb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.npcryb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 27 05:56:38 np0005537642 systemd[1]: var-lib-containers-storage-overlay-c7ee00edb25a5b67b7257703125a0446a6c2413a74161981eb058e4a48d80531-merged.mount: Deactivated successfully.
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.npcryb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 27 05:56:38 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v111: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 54 KiB/s, 0 objects/s recovering
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:56:38 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.npcryb on compute-1
Nov 27 05:56:38 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.npcryb on compute-1
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1329207592' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:38 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.npcryb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 27 05:56:38 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 4 completed events
Nov 27 05:56:39 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:56:39 np0005537642 podman[85783]: 2025-11-27 10:56:39.256358085 +0000 UTC m=+9.858767700 container remove 1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3 (image=quay.io/ceph/ceph:v19, name=laughing_yalow, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 27 05:56:39 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:39 np0005537642 systemd[1]: libpod-conmon-1c35d469e0b747e341428276c36677069537785c49b001f282da0fa28f0e59a3.scope: Deactivated successfully.
Nov 27 05:56:39 np0005537642 python3[85861]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:56:39 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:39 np0005537642 podman[85862]: 2025-11-27 10:56:39.671403685 +0000 UTC m=+0.028862795 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:56:39 np0005537642 podman[85862]: 2025-11-27 10:56:39.981768512 +0000 UTC m=+0.339227562 container create 81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74 (image=quay.io/ceph/ceph:v19, name=adoring_einstein, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 27 05:56:40 np0005537642 systemd[1]: Started libpod-conmon-81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74.scope.
Nov 27 05:56:40 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:56:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec1c4d0a37b2c5040645cb86ede5d4d09b320bf4b69ec19a8cbe742740c2902/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dec1c4d0a37b2c5040645cb86ede5d4d09b320bf4b69ec19a8cbe742740c2902/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:56:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:56:40 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.npcryb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 27 05:56:40 np0005537642 ceph-mon[74338]: Deploying daemon mgr.compute-1.npcryb on compute-1
Nov 27 05:56:40 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:40 np0005537642 ceph-mon[74338]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:56:40 np0005537642 podman[85862]: 2025-11-27 10:56:40.349992592 +0000 UTC m=+0.707451672 container init 81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74 (image=quay.io/ceph/ceph:v19, name=adoring_einstein, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:56:40 np0005537642 podman[85862]: 2025-11-27 10:56:40.360943112 +0000 UTC m=+0.718402162 container start 81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74 (image=quay.io/ceph/ceph:v19, name=adoring_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 27 05:56:40 np0005537642 podman[85862]: 2025-11-27 10:56:40.440166259 +0000 UTC m=+0.797625309 container attach 81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74 (image=quay.io/ceph/ceph:v19, name=adoring_einstein, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 27 05:56:40 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v112: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 45 KiB/s, 0 objects/s recovering
Nov 27 05:56:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Nov 27 05:56:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/675944323' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 27 05:56:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 27 05:56:42 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v113: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:43 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:44 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v114: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:44 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:45 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:46 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:46 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v115: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:46 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_10:56:46
Nov 27 05:56:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 05:56:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 27 05:56:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] pools ['vms', 'backups', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes']
Nov 27 05:56:46 np0005537642 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 upmap changes
Nov 27 05:56:47 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:47 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 2.1557249951162338e-05 of space, bias 1.0, pg target 0.004311449990232467 quantized to 1 (current 1)
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 27 05:56:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:56:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v116: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:48 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:49 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:49 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:50 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).paxos(paxos updating c 1..336) accept timeout, calling fresh election
Nov 27 05:56:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Nov 27 05:56:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/675944323' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 27 05:56:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(probing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:56:50 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v117: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:50 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:51 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:51 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:52 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:52 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v118: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:52 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:53 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:53 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:53 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 27 05:56:53 np0005537642 ceph-mon[74338]: paxos.0).electionLogic(14) init, last seen epoch 14
Nov 27 05:56:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:54 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:54 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v119: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:54 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:55 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:55 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:56 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:56 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v120: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:56 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: paxos.0).electionLogic(17) init, last seen epoch 17, mid-election, bumping
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : monmap epoch 3
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsid 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : last_changed 2025-11-27T10:56:29.287830+0000
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : created 2025-11-27T10:53:19.458310+0000
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : election_strategy: 1
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap 
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e32: 2 total, 2 up, 2 in
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.qnrkij(active, since 3m)
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.meta'
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     application not enabled on pool 'cephfs.cephfs.data'
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz started
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb started
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev db2dd016-a333-4f0c-9a87-1578dd98323c (Updating mgr deployment (+2 -> 3))
Nov 27 05:56:57 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event db2dd016-a333-4f0c-9a87-1578dd98323c (Updating mgr deployment (+2 -> 3)) in 22 seconds
Nov 27 05:56:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:58 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 3e02a6c4-5131-4598-9839-807b320eab7b (Updating crash deployment (+1 -> 3))
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:56:58 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Nov 27 05:56:58 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Nov 27 05:56:58 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-1.npcryb 192.168.122.101:0/2260310895; not ready for session (expect reconnect)
Nov 27 05:56:58 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v121: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 27 05:56:58 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mgr.compute-2.yyrxaz 192.168.122.102:0/1397574382; not ready for session (expect reconnect)
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/675944323' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: mon.compute-1 calling monitor election
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: mon.compute-2 calling monitor election
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: mon.compute-0 calling monitor election
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Nov 27 05:56:58 np0005537642 ceph-mon[74338]:    application not enabled on pool 'images'
Nov 27 05:56:58 np0005537642 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.meta'
Nov 27 05:56:58 np0005537642 ceph-mon[74338]:    application not enabled on pool 'cephfs.cephfs.data'
Nov 27 05:56:58 np0005537642 ceph-mon[74338]:    use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 27 05:56:58 np0005537642 ceph-mon[74338]: Deploying daemon crash.compute-2 on compute-2
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/675944323' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e33 e33: 2 total, 2 up, 2 in
Nov 27 05:56:59 np0005537642 adoring_einstein[85878]: enabled application 'rbd' on pool 'images'
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e33: 2 total, 2 up, 2 in
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.qnrkij(active, since 3m), standbys: compute-2.yyrxaz, compute-1.npcryb
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.yyrxaz", "id": "compute-2.yyrxaz"} v 0)
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-2.yyrxaz", "id": "compute-2.yyrxaz"}]: dispatch
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.npcryb", "id": "compute-1.npcryb"} v 0)
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-1.npcryb", "id": "compute-1.npcryb"}]: dispatch
Nov 27 05:56:59 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 4aa9a7a3-c4bb-4f03-934a-dc50c4c79a0c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:56:59 np0005537642 systemd[1]: libpod-81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74.scope: Deactivated successfully.
Nov 27 05:56:59 np0005537642 podman[85862]: 2025-11-27 10:56:59.079873429 +0000 UTC m=+19.437332479 container died 81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74 (image=quay.io/ceph/ceph:v19, name=adoring_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 27 05:56:59 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 5 completed events
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:59 np0005537642 systemd[1]: var-lib-containers-storage-overlay-dec1c4d0a37b2c5040645cb86ede5d4d09b320bf4b69ec19a8cbe742740c2902-merged.mount: Deactivated successfully.
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:56:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 27 05:57:00 np0005537642 podman[85862]: 2025-11-27 10:57:00.193657055 +0000 UTC m=+20.551116105 container remove 81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74 (image=quay.io/ceph/ceph:v19, name=adoring_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:00 np0005537642 systemd[1]: libpod-conmon-81f449904c903acf207b22a53dfe6a4fe180d4d1cc0a16dbdefaa04b50c54f74.scope: Deactivated successfully.
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 e34: 2 total, 2 up, 2 in
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/675944323' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e34: 2 total, 2 up, 2 in
Nov 27 05:57:00 np0005537642 python3[85940]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:00 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 710924fe-20ea-4452-a387-71959e3e987b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:00 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v124: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:00 np0005537642 podman[85941]: 2025-11-27 10:57:00.698860111 +0000 UTC m=+0.046487891 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:00 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 3e02a6c4-5131-4598-9839-807b320eab7b (Updating crash deployment (+1 -> 3))
Nov 27 05:57:00 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 3e02a6c4-5131-4598-9839-807b320eab7b (Updating crash deployment (+1 -> 3)) in 3 seconds
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Nov 27 05:57:00 np0005537642 podman[85941]: 2025-11-27 10:57:00.9224155 +0000 UTC m=+0.270043260 container create 5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81 (image=quay.io/ceph/ceph:v19, name=cranky_sinoussi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:01 np0005537642 systemd[1]: Started libpod-conmon-5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81.scope.
Nov 27 05:57:01 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a89669d5f643e6f81ad243d16401b9be585eaea243d29d4021c6f411f33609/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8a89669d5f643e6f81ad243d16401b9be585eaea243d29d4021c6f411f33609/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:01 np0005537642 podman[85941]: 2025-11-27 10:57:01.112325624 +0000 UTC m=+0.459953374 container init 5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81 (image=quay.io/ceph/ceph:v19, name=cranky_sinoussi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:01 np0005537642 podman[85941]: 2025-11-27 10:57:01.122529413 +0000 UTC m=+0.470157143 container start 5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81 (image=quay.io/ceph/ceph:v19, name=cranky_sinoussi, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 27 05:57:01 np0005537642 podman[85941]: 2025-11-27 10:57:01.205332995 +0000 UTC m=+0.552960775 container attach 5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81 (image=quay.io/ceph/ceph:v19, name=cranky_sinoussi, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2676223360' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:57:01 np0005537642 podman[86070]: 2025-11-27 10:57:01.596747513 +0000 UTC m=+0.026189627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:01 np0005537642 podman[86070]: 2025-11-27 10:57:01.741820986 +0000 UTC m=+0.171263050 container create 83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_newton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e35 e35: 2 total, 2 up, 2 in
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e35: 2 total, 2 up, 2 in
Nov 27 05:57:01 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 92b76d52-0bda-43ff-93fb-52da7398c849 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:57:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:01 np0005537642 systemd[1]: Started libpod-conmon-83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625.scope.
Nov 27 05:57:01 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:01 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 35 pg[2.0( empty local-lis/les=29/30 n=0 ec=14/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.216547966s) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active pruub 92.880798340s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:01 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 35 pg[2.0( empty local-lis/les=29/30 n=0 ec=14/14 lis/c=29/29 les/c/f=30/30/0 sis=35 pruub=14.216547966s) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown pruub 92.880798340s@ mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:01 np0005537642 podman[86070]: 2025-11-27 10:57:01.963087057 +0000 UTC m=+0.392529121 container init 83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_newton, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:01 np0005537642 podman[86070]: 2025-11-27 10:57:01.976816499 +0000 UTC m=+0.406258533 container start 83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:01 np0005537642 unruffled_newton[86087]: 167 167
Nov 27 05:57:01 np0005537642 systemd[1]: libpod-83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625.scope: Deactivated successfully.
Nov 27 05:57:02 np0005537642 podman[86070]: 2025-11-27 10:57:02.033362382 +0000 UTC m=+0.462804416 container attach 83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_newton, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 27 05:57:02 np0005537642 podman[86070]: 2025-11-27 10:57:02.033927919 +0000 UTC m=+0.463370003 container died 83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_newton, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:02 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8cc3449d9897995c68d9d3824add9e63ed46d10f46cced53f43cb20ca68dfe6c-merged.mount: Deactivated successfully.
Nov 27 05:57:02 np0005537642 podman[86070]: 2025-11-27 10:57:02.617102595 +0000 UTC m=+1.046544659 container remove 83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_newton, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "f997980f-1f21-4e9f-aafb-01b8bc0a19a8"} v 0)
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f997980f-1f21-4e9f-aafb-01b8bc0a19a8"}]: dispatch
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 27 05:57:02 np0005537642 systemd[1]: libpod-conmon-83d3d0cd2133176b691245b5b5fe4d976f57ffb9c21e7ae76f4cc8ad2a4d3625.scope: Deactivated successfully.
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2676223360' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2676223360' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f997980f-1f21-4e9f-aafb-01b8bc0a19a8"}]': finished
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e36 e36: 3 total, 2 up, 3 in
Nov 27 05:57:02 np0005537642 cranky_sinoussi[85979]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 27 05:57:02 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v126: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 2 up, 3 in
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.7( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.2( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.3( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.11( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.14( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.8( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.16( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.17( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1a( empty local-lis/les=29/30 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:02 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 32356c02-c71b-47b3-ac4e-29b539ab22c7 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 27 05:57:02 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:57:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=14/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=29/29 les/c/f=30/30/0 sis=35) [1] r=0 lpr=35 pi=[29,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:02 np0005537642 systemd[1]: libpod-5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81.scope: Deactivated successfully.
Nov 27 05:57:02 np0005537642 conmon[85979]: conmon 5dd6d43d2ad3f887ff46 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81.scope/container/memory.events
Nov 27 05:57:02 np0005537642 podman[85941]: 2025-11-27 10:57:02.710922459 +0000 UTC m=+2.058550189 container died 5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81 (image=quay.io/ceph/ceph:v19, name=cranky_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 27 05:57:02 np0005537642 systemd[1]: var-lib-containers-storage-overlay-b8a89669d5f643e6f81ad243d16401b9be585eaea243d29d4021c6f411f33609-merged.mount: Deactivated successfully.
Nov 27 05:57:03 np0005537642 podman[85941]: 2025-11-27 10:57:03.009035459 +0000 UTC m=+2.356663219 container remove 5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81 (image=quay.io/ceph/ceph:v19, name=cranky_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 27 05:57:03 np0005537642 systemd[1]: libpod-conmon-5dd6d43d2ad3f887ff4631920f393254325231fe4651a3112afd3736a0b66c81.scope: Deactivated successfully.
Nov 27 05:57:03 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Nov 27 05:57:03 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Nov 27 05:57:03 np0005537642 podman[86121]: 2025-11-27 10:57:03.04123381 +0000 UTC m=+0.261146499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:04 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 27 05:57:04 np0005537642 podman[86121]: 2025-11-27 10:57:04.293847517 +0000 UTC m=+1.513760196 container create 0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 27 05:57:04 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 6 completed events
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:57:04 np0005537642 python3[86165]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:04 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.102:0/493566679' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f997980f-1f21-4e9f-aafb-01b8bc0a19a8"}]: dispatch
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "f997980f-1f21-4e9f-aafb-01b8bc0a19a8"}]: dispatch
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2676223360' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f997980f-1f21-4e9f-aafb-01b8bc0a19a8"}]': finished
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:04 np0005537642 systemd[1]: Started libpod-conmon-0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0.scope.
Nov 27 05:57:04 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c6774e6ce1c198a1f1997f983b3d33e4a2dbfb5ebd7d8530fc132e0b508b0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c6774e6ce1c198a1f1997f983b3d33e4a2dbfb5ebd7d8530fc132e0b508b0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c6774e6ce1c198a1f1997f983b3d33e4a2dbfb5ebd7d8530fc132e0b508b0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c6774e6ce1c198a1f1997f983b3d33e4a2dbfb5ebd7d8530fc132e0b508b0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07c6774e6ce1c198a1f1997f983b3d33e4a2dbfb5ebd7d8530fc132e0b508b0a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:04 np0005537642 podman[86166]: 2025-11-27 10:57:04.630457652 +0000 UTC m=+0.133634559 container create 6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a (image=quay.io/ceph/ceph:v19, name=gallant_kowalevski, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 27 05:57:04 np0005537642 podman[86166]: 2025-11-27 10:57:04.539168762 +0000 UTC m=+0.042345709 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e37 e37: 3 total, 2 up, 3 in
Nov 27 05:57:04 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v128: 69 pgs: 1 peering, 62 unknown, 6 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 2 up, 3 in
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:04 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:04 np0005537642 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
Nov 27 05:57:04 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 1bbc582c-1722-4345-892d-b5cd3e348c51 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:57:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:04 np0005537642 podman[86121]: 2025-11-27 10:57:04.719532117 +0000 UTC m=+1.939444766 container init 0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dewdney, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:04 np0005537642 podman[86121]: 2025-11-27 10:57:04.725712068 +0000 UTC m=+1.945624707 container start 0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dewdney, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:57:04 np0005537642 systemd[1]: Started libpod-conmon-6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a.scope.
Nov 27 05:57:04 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:04 np0005537642 podman[86121]: 2025-11-27 10:57:04.848668444 +0000 UTC m=+2.068581123 container attach 0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dewdney, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 27 05:57:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511a65054e55f0718087a023085635f23d3aa1cba4957b6c7dcfd11805a67c5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/511a65054e55f0718087a023085635f23d3aa1cba4957b6c7dcfd11805a67c5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:04 np0005537642 podman[86166]: 2025-11-27 10:57:04.897443061 +0000 UTC m=+0.400620038 container init 6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a (image=quay.io/ceph/ceph:v19, name=gallant_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:04 np0005537642 podman[86166]: 2025-11-27 10:57:04.907167205 +0000 UTC m=+0.410344082 container start 6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a (image=quay.io/ceph/ceph:v19, name=gallant_kowalevski, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 27 05:57:04 np0005537642 podman[86166]: 2025-11-27 10:57:04.923679738 +0000 UTC m=+0.426856725 container attach 6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a (image=quay.io/ceph/ceph:v19, name=gallant_kowalevski, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 27 05:57:05 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 27 05:57:05 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 27 05:57:05 np0005537642 flamboyant_dewdney[86182]: --> passed data devices: 0 physical, 1 LVM
Nov 27 05:57:05 np0005537642 flamboyant_dewdney[86182]: --> All data devices are unavailable
Nov 27 05:57:05 np0005537642 systemd[1]: libpod-0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0.scope: Deactivated successfully.
Nov 27 05:57:05 np0005537642 conmon[86182]: conmon 0a901fa0004675300792 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0.scope/container/memory.events
Nov 27 05:57:05 np0005537642 podman[86121]: 2025-11-27 10:57:05.120286509 +0000 UTC m=+2.340199168 container died 0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dewdney, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:05 np0005537642 systemd[1]: var-lib-containers-storage-overlay-07c6774e6ce1c198a1f1997f983b3d33e4a2dbfb5ebd7d8530fc132e0b508b0a-merged.mount: Deactivated successfully.
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1381433325' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 27 05:57:05 np0005537642 podman[86121]: 2025-11-27 10:57:05.617784569 +0000 UTC m=+2.837697248 container remove 0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_dewdney, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 27 05:57:05 np0005537642 systemd[1]: libpod-conmon-0a901fa0004675300792977d0d19d688ce3c1e2344a7fc58085a0cf6c86fc8c0.scope: Deactivated successfully.
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1381433325' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1381433325' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e38 e38: 3 total, 2 up, 3 in
Nov 27 05:57:05 np0005537642 gallant_kowalevski[86190]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 2 up, 3 in
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 7ee43ed5-4093-4678-a8db-51bf8817ac27 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 4aa9a7a3-c4bb-4f03-934a-dc50c4c79a0c (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 4aa9a7a3-c4bb-4f03-934a-dc50c4c79a0c (PG autoscaler increasing pool 2 PGs from 1 to 32) in 7 seconds
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 710924fe-20ea-4452-a387-71959e3e987b (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 710924fe-20ea-4452-a387-71959e3e987b (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 92b76d52-0bda-43ff-93fb-52da7398c849 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 92b76d52-0bda-43ff-93fb-52da7398c849 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 4 seconds
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 32356c02-c71b-47b3-ac4e-29b539ab22c7 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 32356c02-c71b-47b3-ac4e-29b539ab22c7 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 3 seconds
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 1bbc582c-1722-4345-892d-b5cd3e348c51 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 1bbc582c-1722-4345-892d-b5cd3e348c51 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 7ee43ed5-4093-4678-a8db-51bf8817ac27 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 27 05:57:05 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 7ee43ed5-4093-4678-a8db-51bf8817ac27 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
Nov 27 05:57:05 np0005537642 podman[86166]: 2025-11-27 10:57:05.86948222 +0000 UTC m=+1.372659087 container died 6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a (image=quay.io/ceph/ceph:v19, name=gallant_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:05 np0005537642 systemd[1]: libpod-6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a.scope: Deactivated successfully.
Nov 27 05:57:06 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 27 05:57:06 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 27 05:57:06 np0005537642 systemd[1]: var-lib-containers-storage-overlay-511a65054e55f0718087a023085635f23d3aa1cba4957b6c7dcfd11805a67c5f-merged.mount: Deactivated successfully.
Nov 27 05:57:06 np0005537642 podman[86166]: 2025-11-27 10:57:06.253718038 +0000 UTC m=+1.756894945 container remove 6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a (image=quay.io/ceph/ceph:v19, name=gallant_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:06 np0005537642 systemd[1]: libpod-conmon-6a03874ff43ab0e3d00251302b59afcde074493dc2fa918e1c7eec9e269cff5a.scope: Deactivated successfully.
Nov 27 05:57:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:06 np0005537642 podman[86340]: 2025-11-27 10:57:06.510969142 +0000 UTC m=+0.040653690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:06 np0005537642 podman[86340]: 2025-11-27 10:57:06.644849358 +0000 UTC m=+0.174533856 container create 6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:06 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v131: 131 pgs: 2 peering, 62 unknown, 67 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:57:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:57:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:06 np0005537642 systemd[1]: Started libpod-conmon-6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea.scope.
Nov 27 05:57:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 27 05:57:06 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 27 05:57:06 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 27 05:57:06 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:07 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1381433325' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:07 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 27 05:57:07 np0005537642 podman[86340]: 2025-11-27 10:57:07.247955187 +0000 UTC m=+0.777639725 container init 6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:07 np0005537642 podman[86340]: 2025-11-27 10:57:07.259063092 +0000 UTC m=+0.788747600 container start 6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 27 05:57:07 np0005537642 vibrant_sinoussi[86356]: 167 167
Nov 27 05:57:07 np0005537642 systemd[1]: libpod-6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea.scope: Deactivated successfully.
Nov 27 05:57:07 np0005537642 podman[86340]: 2025-11-27 10:57:07.357522232 +0000 UTC m=+0.887206740 container attach 6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 27 05:57:07 np0005537642 podman[86340]: 2025-11-27 10:57:07.358096269 +0000 UTC m=+0.887780767 container died 6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e39 e39: 3 total, 2 up, 3 in
Nov 27 05:57:07 np0005537642 python3[86434]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 2 up, 3 in
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:07 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:07 np0005537642 systemd[1]: var-lib-containers-storage-overlay-1297a69242ced31eb55ab430932e822de41bfdfcce7c54d738bed0116562eb48-merged.mount: Deactivated successfully.
Nov 27 05:57:07 np0005537642 podman[86340]: 2025-11-27 10:57:07.655111096 +0000 UTC m=+1.184795594 container remove 6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_sinoussi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:07 np0005537642 systemd[1]: libpod-conmon-6e4a829931d20b08130552684b0683e5a6be3085f23358b38a72496e5b5a0dea.scope: Deactivated successfully.
Nov 27 05:57:07 np0005537642 python3[86521]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764241026.9922194-37105-131346368735675/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:57:07 np0005537642 podman[86527]: 2025-11-27 10:57:07.938392531 +0000 UTC m=+0.117118606 container create 9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 27 05:57:07 np0005537642 podman[86527]: 2025-11-27 10:57:07.851313824 +0000 UTC m=+0.030039969 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:08 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 27 05:57:08 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 27 05:57:08 np0005537642 systemd[1]: Started libpod-conmon-9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e.scope.
Nov 27 05:57:08 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee6f3ff73dd93a41025de32b0736b3762811a599a6a4f6c99b0d879622d505/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee6f3ff73dd93a41025de32b0736b3762811a599a6a4f6c99b0d879622d505/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee6f3ff73dd93a41025de32b0736b3762811a599a6a4f6c99b0d879622d505/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:08 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32ee6f3ff73dd93a41025de32b0736b3762811a599a6a4f6c99b0d879622d505/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: Cluster is now healthy
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:57:08 np0005537642 podman[86527]: 2025-11-27 10:57:08.255129595 +0000 UTC m=+0.433855660 container init 9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cartwright, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:08 np0005537642 podman[86527]: 2025-11-27 10:57:08.268818246 +0000 UTC m=+0.447544331 container start 9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cartwright, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:08 np0005537642 podman[86527]: 2025-11-27 10:57:08.32126566 +0000 UTC m=+0.499991715 container attach 9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e40 e40: 3 total, 2 up, 3 in
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 2 up, 3 in
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:08 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]: {
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:    "1": [
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:        {
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "devices": [
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "/dev/loop3"
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            ],
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "lv_name": "ceph_lv0",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "lv_size": "21470642176",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4c838139-e0c9-556a-a9ca-e4422f459af7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=047f3e15-ba18-4c86-b24b-f8e9584c5eff,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "lv_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "name": "ceph_lv0",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "tags": {
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.block_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.cephx_lockbox_secret": "",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.cluster_fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.cluster_name": "ceph",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.crush_device_class": "",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.encrypted": "0",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.osd_fsid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.osd_id": "1",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.type": "block",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.vdo": "0",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:                "ceph.with_tpm": "0"
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            },
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "type": "block",
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:            "vg_name": "ceph_vg0"
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:        }
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]:    ]
Nov 27 05:57:08 np0005537642 festive_cartwright[86568]: }
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:08 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Nov 27 05:57:08 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Nov 27 05:57:08 np0005537642 systemd[1]: libpod-9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e.scope: Deactivated successfully.
Nov 27 05:57:08 np0005537642 podman[86527]: 2025-11-27 10:57:08.671443842 +0000 UTC m=+0.850169927 container died 9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cartwright, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 27 05:57:08 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v134: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:08 np0005537642 python3[86652]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:57:08 np0005537642 systemd[1]: var-lib-containers-storage-overlay-32ee6f3ff73dd93a41025de32b0736b3762811a599a6a4f6c99b0d879622d505-merged.mount: Deactivated successfully.
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 39 pg[7.0( empty local-lis/les=29/30 n=0 ec=27/27 lis/c=29/29 les/c/f=30/30/0 sis=39 pruub=15.124266624s) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active pruub 100.882499695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.0( empty local-lis/les=29/30 n=0 ec=27/27 lis/c=29/29 les/c/f=30/30/0 sis=39 pruub=15.124266624s) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown pruub 100.882499695s@ mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.4( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.5( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.3( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.6( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.1c( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.1b( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.1d( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.1e( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.1f( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.2( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.7( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.8( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.1( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.11( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.12( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.9( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.a( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.b( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.13( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.14( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.c( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.15( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.16( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.18( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.17( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.19( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.1a( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.d( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.e( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.f( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 40 pg[7.10( empty local-lis/les=29/30 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Nov 27 05:57:09 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Nov 27 05:57:09 np0005537642 podman[86527]: 2025-11-27 10:57:09.196570771 +0000 UTC m=+1.375296856 container remove 9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cartwright, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 27 05:57:09 np0005537642 systemd[1]: libpod-conmon-9e36c76315a45cab387d579dd68a37e159a8c35d297aecb9c0d4c96424d4961e.scope: Deactivated successfully.
Nov 27 05:57:09 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 27 05:57:09 np0005537642 python3[86742]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764241028.2742932-37119-262608619481118/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=b4e1d33019ed44dbf1fc68e5adf383a93b6cd852 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:57:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 27 05:57:09 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 12 completed events
Nov 27 05:57:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:57:09 np0005537642 python3[86856]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e41 e41: 3 total, 2 up, 3 in
Nov 27 05:57:09 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:09 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 2 up, 3 in
Nov 27 05:57:09 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:09 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:09 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:10 np0005537642 podman[86883]: 2025-11-27 10:57:09.912025556 +0000 UTC m=+0.041592438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 27 05:57:10 np0005537642 podman[86885]: 2025-11-27 10:57:09.935708999 +0000 UTC m=+0.048935133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:10 np0005537642 podman[86883]: 2025-11-27 10:57:10.031307415 +0000 UTC m=+0.160874187 container create a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_zhukovsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 27 05:57:10 np0005537642 systemd[1]: Started libpod-conmon-a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d.scope.
Nov 27 05:57:10 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.1c( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.1d( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.1f( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.13( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.12( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.11( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.10( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.14( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.16( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.b( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.a( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.15( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.e( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.6( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.8( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.9( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=39/41 n=0 ec=27/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.5( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.7( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.3( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.1( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.4( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.d( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.2( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.f( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.c( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.1e( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.17( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.18( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.1b( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.19( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 41 pg[7.1a( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=29/29 les/c/f=30/30/0 sis=39) [1] r=0 lpr=39 pi=[29,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:10 np0005537642 podman[86885]: 2025-11-27 10:57:10.620675912 +0000 UTC m=+0.733901996 container create c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c (image=quay.io/ceph/ceph:v19, name=cool_mayer, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:10 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v136: 193 pgs: 1 peering, 93 unknown, 99 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:10 np0005537642 ceph-mon[74338]: Deploying daemon osd.2 on compute-2
Nov 27 05:57:10 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:10 np0005537642 systemd[1]: Started libpod-conmon-c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c.scope.
Nov 27 05:57:10 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:10 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f2cedd5bcf81f2f06b901f4e54c22a39f38cbbd1d1c3ccec9bedcd8fd5ac86/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:10 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f2cedd5bcf81f2f06b901f4e54c22a39f38cbbd1d1c3ccec9bedcd8fd5ac86/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:10 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78f2cedd5bcf81f2f06b901f4e54c22a39f38cbbd1d1c3ccec9bedcd8fd5ac86/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:11 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 27 05:57:11 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 27 05:57:11 np0005537642 podman[86883]: 2025-11-27 10:57:11.239114311 +0000 UTC m=+1.368681163 container init a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_zhukovsky, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:11 np0005537642 podman[86883]: 2025-11-27 10:57:11.252125021 +0000 UTC m=+1.381691833 container start a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:11 np0005537642 gifted_zhukovsky[86910]: 167 167
Nov 27 05:57:11 np0005537642 systemd[1]: libpod-a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d.scope: Deactivated successfully.
Nov 27 05:57:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:11 np0005537642 podman[86883]: 2025-11-27 10:57:11.424330069 +0000 UTC m=+1.553896881 container attach a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:11 np0005537642 podman[86883]: 2025-11-27 10:57:11.425857563 +0000 UTC m=+1.555424375 container died a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:12 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.6 deep-scrub starts
Nov 27 05:57:12 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.6 deep-scrub ok
Nov 27 05:57:12 np0005537642 systemd[1]: var-lib-containers-storage-overlay-7b1275d37afc1fbab569480c8854e8450a959564d8c6c097d7ff304bcefa7ff1-merged.mount: Deactivated successfully.
Nov 27 05:57:12 np0005537642 podman[86885]: 2025-11-27 10:57:12.446091564 +0000 UTC m=+2.559317668 container init c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c (image=quay.io/ceph/ceph:v19, name=cool_mayer, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:12 np0005537642 podman[86885]: 2025-11-27 10:57:12.456033045 +0000 UTC m=+2.569259119 container start c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c (image=quay.io/ceph/ceph:v19, name=cool_mayer, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:12 np0005537642 podman[86885]: 2025-11-27 10:57:12.665892763 +0000 UTC m=+2.779118887 container attach c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c (image=quay.io/ceph/ceph:v19, name=cool_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:12 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v137: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/175809760' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Nov 27 05:57:13 np0005537642 podman[86883]: 2025-11-27 10:57:13.083489626 +0000 UTC m=+3.213056428 container remove a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_zhukovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 05:57:13 np0005537642 systemd[1]: libpod-conmon-a7cd86f36ed5f113006b2f1e7539bad248e7ad5c9ec7a7d7ec69716cfde3379d.scope: Deactivated successfully.
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/175809760' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 27 05:57:13 np0005537642 cool_mayer[86915]: 
Nov 27 05:57:13 np0005537642 cool_mayer[86915]: [global]
Nov 27 05:57:13 np0005537642 cool_mayer[86915]: #011fsid = 4c838139-e0c9-556a-a9ca-e4422f459af7
Nov 27 05:57:13 np0005537642 cool_mayer[86915]: #011mon_host = 192.168.122.100
Nov 27 05:57:13 np0005537642 systemd[1]: libpod-c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c.scope: Deactivated successfully.
Nov 27 05:57:13 np0005537642 podman[86885]: 2025-11-27 10:57:13.202500047 +0000 UTC m=+3.315726111 container died c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c (image=quay.io/ceph/ceph:v19, name=cool_mayer, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:13 np0005537642 systemd[1]: var-lib-containers-storage-overlay-78f2cedd5bcf81f2f06b901f4e54c22a39f38cbbd1d1c3ccec9bedcd8fd5ac86-merged.mount: Deactivated successfully.
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/175809760' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/175809760' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e42 e42: 3 total, 2 up, 3 in
Nov 27 05:57:13 np0005537642 podman[86885]: 2025-11-27 10:57:13.691851259 +0000 UTC m=+3.805077333 container remove c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c (image=quay.io/ceph/ceph:v19, name=cool_mayer, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 2 up, 3 in
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:13 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552398682s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.019966125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552367210s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.019966125s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.964323997s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431968689s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.964285851s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431968689s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552317619s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020057678s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.964072227s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431846619s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552502632s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020301819s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552285194s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020057678s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.964045525s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431846619s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552489281s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020301819s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963867188s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431762695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963835716s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431762695s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552378654s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020362854s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552367210s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020362854s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963438988s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431564331s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963421822s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431556702s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552522659s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020660400s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963399887s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431556702s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963401794s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431564331s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552491188s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020660400s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552324295s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020584106s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552309036s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020584106s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552405357s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020759583s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963147163s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431541443s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552270889s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020683289s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552382469s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020759583s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963767052s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.432174683s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552257538s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020683289s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963123322s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431541443s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.963732719s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.432174683s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552133560s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020706177s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552118301s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020706177s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552131653s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020767212s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.962391853s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431098938s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552244186s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020973206s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552218437s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020973206s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552053452s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020767212s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.962356567s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431098938s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.961824417s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.430694580s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.962197304s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431076050s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.961807251s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.430694580s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.962181091s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431076050s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551943779s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.020889282s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552093506s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.021064758s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551923752s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.020889282s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.552062988s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.021064758s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.961816788s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431114197s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.961800575s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431114197s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551793098s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.021148682s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551766396s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.021148682s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.961360931s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.430793762s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.961347580s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.430793762s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551608086s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.021209717s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551591873s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.021194458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551577568s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.021194458s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.961780548s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.431396484s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551596642s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.021209717s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551424026s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active pruub 103.021080017s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.961753845s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.431396484s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=42 pruub=12.551385880s) [0] r=-1 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.021080017s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.960915565s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.430656433s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.960928917s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active pruub 103.430671692s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.960904121s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.430656433s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=42 pruub=12.960890770s) [0] r=-1 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 103.430671692s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:13 np0005537642 systemd[1]: libpod-conmon-c5da4448a3516c98ad35cef7cfc4537984a816948a1ffba10e6a32b39871890c.scope: Deactivated successfully.
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.1a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.18( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.1d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.1b( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.1a( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.19( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.1e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.1a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.1c( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.9( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.f( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.e( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.2( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.3( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.4( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.1c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.7( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.5( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.a( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.c( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.c( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.d( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.1( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.e( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.f( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.9( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.10( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.15( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.16( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.17( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.15( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.13( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.15( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.11( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.15( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.14( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[3.16( empty local-lis/les=0/0 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.12( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[4.1f( empty local-lis/les=0/0 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[6.1c( empty local-lis/les=0/0 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.1f( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.10( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 42 pg[5.11( empty local-lis/les=0/0 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:13 np0005537642 podman[86975]: 2025-11-27 10:57:13.844800783 +0000 UTC m=+0.565173771 container create 6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_swartz, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:13 np0005537642 podman[86975]: 2025-11-27 10:57:13.752976787 +0000 UTC m=+0.473349815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:13 np0005537642 systemd[1]: Started libpod-conmon-6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a.scope.
Nov 27 05:57:14 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08413f335727ccf9a67f4fb56ef838e64f280d20277d93bb2a09109284004e0a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08413f335727ccf9a67f4fb56ef838e64f280d20277d93bb2a09109284004e0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08413f335727ccf9a67f4fb56ef838e64f280d20277d93bb2a09109284004e0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08413f335727ccf9a67f4fb56ef838e64f280d20277d93bb2a09109284004e0a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:14 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Nov 27 05:57:14 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Nov 27 05:57:14 np0005537642 podman[86975]: 2025-11-27 10:57:14.135026461 +0000 UTC m=+0.855399449 container init 6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:14 np0005537642 python3[87015]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:14 np0005537642 podman[86975]: 2025-11-27 10:57:14.150859934 +0000 UTC m=+0.871232922 container start 6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_swartz, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:14 np0005537642 podman[86975]: 2025-11-27 10:57:14.194674006 +0000 UTC m=+0.915046994 container attach 6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_swartz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:14 np0005537642 podman[87023]: 2025-11-27 10:57:14.20713802 +0000 UTC m=+0.045414079 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:14 np0005537642 podman[87023]: 2025-11-27 10:57:14.306202218 +0000 UTC m=+0.144478267 container create 76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc (image=quay.io/ceph/ceph:v19, name=eloquent_tu, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:14 np0005537642 systemd[1]: Started libpod-conmon-76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc.scope.
Nov 27 05:57:14 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bc1d54728abdbfac8075000f2d6f4aba5bf494e5a5eafd2243c677dd32f384/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bc1d54728abdbfac8075000f2d6f4aba5bf494e5a5eafd2243c677dd32f384/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0bc1d54728abdbfac8075000f2d6f4aba5bf494e5a5eafd2243c677dd32f384/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:14 np0005537642 podman[87023]: 2025-11-27 10:57:14.509771642 +0000 UTC m=+0.348047711 container init 76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc (image=quay.io/ceph/ceph:v19, name=eloquent_tu, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:14 np0005537642 podman[87023]: 2025-11-27 10:57:14.516382725 +0000 UTC m=+0.354658744 container start 76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc (image=quay.io/ceph/ceph:v19, name=eloquent_tu, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 27 05:57:14 np0005537642 podman[87023]: 2025-11-27 10:57:14.531542508 +0000 UTC m=+0.369818537 container attach 76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc (image=quay.io/ceph/ceph:v19, name=eloquent_tu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 27 05:57:14 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v139: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e43 e43: 3 total, 2 up, 3 in
Nov 27 05:57:14 np0005537642 lvm[87132]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:57:14 np0005537642 lvm[87132]: VG ceph_vg0 finished
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:57:14 np0005537642 hopeful_swartz[87019]: {}
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 2 up, 3 in
Nov 27 05:57:14 np0005537642 systemd[1]: libpod-6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a.scope: Deactivated successfully.
Nov 27 05:57:14 np0005537642 systemd[1]: libpod-6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a.scope: Consumed 1.247s CPU time.
Nov 27 05:57:14 np0005537642 podman[86975]: 2025-11-27 10:57:14.954975123 +0000 UTC m=+1.675348151 container died 6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_swartz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:14 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event c5217927-6d36-4de2-b551-7c17d49d5ed8 (Global Recovery Event) in 10 seconds
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:14 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:14 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.1e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:14 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.10( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.1f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.12( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.15( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.15( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.17( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.15( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.1f( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.4( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.7( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.7( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.2( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.5( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.2( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.3( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.d( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.e( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.1c( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.1b( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=35/35 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[35,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.19( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.c( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.1a( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=38/38 les/c/f=39/39/0 sis=42) [1] r=0 lpr=42 pi=[38,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=37/37 les/c/f=38/38/0 sis=42) [1] r=0 lpr=42 pi=[37,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 43 pg[6.8( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=39/39 les/c/f=40/40/0 sis=42) [1] r=0 lpr=42 pi=[39,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/531099834' entity='client.admin' 
Nov 27 05:57:15 np0005537642 eloquent_tu[87051]: set ssl_option
Nov 27 05:57:15 np0005537642 systemd[1]: libpod-76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc.scope: Deactivated successfully.
Nov 27 05:57:15 np0005537642 systemd[1]: var-lib-containers-storage-overlay-08413f335727ccf9a67f4fb56ef838e64f280d20277d93bb2a09109284004e0a-merged.mount: Deactivated successfully.
Nov 27 05:57:15 np0005537642 podman[86975]: 2025-11-27 10:57:15.276639311 +0000 UTC m=+1.997012299 container remove 6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_swartz, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 27 05:57:15 np0005537642 podman[87023]: 2025-11-27 10:57:15.28479093 +0000 UTC m=+1.123066989 container died 76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc (image=quay.io/ceph/ceph:v19, name=eloquent_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:15 np0005537642 systemd[1]: libpod-conmon-6060ae200b639997ca4360d5c04dca6181be19d279647f534ef2c84c5049895a.scope: Deactivated successfully.
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:57:15 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:57:15 np0005537642 systemd[1]: var-lib-containers-storage-overlay-d0bc1d54728abdbfac8075000f2d6f4aba5bf494e5a5eafd2243c677dd32f384-merged.mount: Deactivated successfully.
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Nov 27 05:57:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 27 05:57:15 np0005537642 podman[87023]: 2025-11-27 10:57:15.674479797 +0000 UTC m=+1.512755816 container remove 76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc (image=quay.io/ceph/ceph:v19, name=eloquent_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 27 05:57:15 np0005537642 systemd[1]: libpod-conmon-76c726e2c431052947bb808faecbb60e32f28bef932dd03b925ca1903ffd4bdc.scope: Deactivated successfully.
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/531099834' entity='client.admin' 
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: from='osd.2 [v2:192.168.122.102:6800/1778559861,v1:192.168.122.102:6801/1778559861]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 27 05:57:16 np0005537642 python3[87244]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:16 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 27 05:57:16 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 27 05:57:16 np0005537642 podman[87265]: 2025-11-27 10:57:16.149980755 +0000 UTC m=+0.043150863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:16 np0005537642 podman[87265]: 2025-11-27 10:57:16.317128834 +0000 UTC m=+0.210298922 container create 38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c (image=quay.io/ceph/ceph:v19, name=determined_ritchie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:16 np0005537642 systemd[1]: Started libpod-conmon-38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c.scope.
Nov 27 05:57:16 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:16 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e189e6e023a3bd8cb8848143c1f090b4f476a168f66c6d54c31fb49e143b459/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:16 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e189e6e023a3bd8cb8848143c1f090b4f476a168f66c6d54c31fb49e143b459/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:16 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e189e6e023a3bd8cb8848143c1f090b4f476a168f66c6d54c31fb49e143b459/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 27 05:57:16 np0005537642 podman[87265]: 2025-11-27 10:57:16.507947565 +0000 UTC m=+0.401117703 container init 38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c (image=quay.io/ceph/ceph:v19, name=determined_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:16 np0005537642 podman[87265]: 2025-11-27 10:57:16.51428793 +0000 UTC m=+0.407457978 container start 38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c (image=quay.io/ceph/ceph:v19, name=determined_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e44 e44: 3 total, 2 up, 3 in
Nov 27 05:57:16 np0005537642 podman[87265]: 2025-11-27 10:57:16.573337966 +0000 UTC m=+0.466508054 container attach 38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c (image=quay.io/ceph/ceph:v19, name=determined_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 2 up, 3 in
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:16 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e44 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Nov 27 05:57:16 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v142: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:16 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14253 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:57:16 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 27 05:57:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 27 05:57:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:16 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Nov 27 05:57:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:57:17 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:17 np0005537642 determined_ritchie[87303]: Scheduled rgw.rgw update...
Nov 27 05:57:17 np0005537642 determined_ritchie[87303]: Scheduled ingress.rgw.default update...
Nov 27 05:57:17 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 27 05:57:17 np0005537642 systemd[1]: libpod-38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c.scope: Deactivated successfully.
Nov 27 05:57:17 np0005537642 podman[87265]: 2025-11-27 10:57:17.12255197 +0000 UTC m=+1.015722038 container died 38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c (image=quay.io/ceph/ceph:v19, name=determined_ritchie, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: from='osd.2 [v2:192.168.122.102:6800/1778559861,v1:192.168.122.102:6801/1778559861]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:17 np0005537642 systemd[1]: var-lib-containers-storage-overlay-5e189e6e023a3bd8cb8848143c1f090b4f476a168f66c6d54c31fb49e143b459-merged.mount: Deactivated successfully.
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:17 np0005537642 podman[87265]: 2025-11-27 10:57:17.698898217 +0000 UTC m=+1.592068295 container remove 38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c (image=quay.io/ceph/ceph:v19, name=determined_ritchie, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e45 e45: 3 total, 2 up, 3 in
Nov 27 05:57:17 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1778559861; not ready for session (expect reconnect)
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 2 up, 3 in
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:17 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:17 np0005537642 systemd[1]: libpod-conmon-38cf0caab2a8d9dc4e7a057ff592798c1856a06a67c10ed0ef709f26a253e27c.scope: Deactivated successfully.
Nov 27 05:57:18 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 27 05:57:18 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 27 05:57:18 np0005537642 python3[87422]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: Saving service ingress.rgw.default spec with placement count:2
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v144: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:18 np0005537642 python3[87493]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764241037.9090905-37138-160925062524024/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1778559861; not ready for session (expect reconnect)
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:18 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:19 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 27 05:57:19 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 27 05:57:19 np0005537642 python3[87543]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:19 np0005537642 podman[87544]: 2025-11-27 10:57:19.617469651 +0000 UTC m=+0.114407347 container create 62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c (image=quay.io/ceph/ceph:v19, name=flamboyant_noyce, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Nov 27 05:57:19 np0005537642 podman[87544]: 2025-11-27 10:57:19.541205881 +0000 UTC m=+0.038143617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:19 np0005537642 systemd[1]: Started libpod-conmon-62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c.scope.
Nov 27 05:57:19 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9207c72fbc2318994620e602f3edcbd99dc9c057314593830a0d6c4c4cd3acd6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9207c72fbc2318994620e602f3edcbd99dc9c057314593830a0d6c4c4cd3acd6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9207c72fbc2318994620e602f3edcbd99dc9c057314593830a0d6c4c4cd3acd6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:19 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1778559861; not ready for session (expect reconnect)
Nov 27 05:57:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:19 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:19 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:19 np0005537642 podman[87544]: 2025-11-27 10:57:19.800419802 +0000 UTC m=+0.297357468 container init 62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c (image=quay.io/ceph/ceph:v19, name=flamboyant_noyce, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 27 05:57:19 np0005537642 podman[87544]: 2025-11-27 10:57:19.80719441 +0000 UTC m=+0.304132106 container start 62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c (image=quay.io/ceph/ceph:v19, name=flamboyant_noyce, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 27 05:57:19 np0005537642 podman[87544]: 2025-11-27 10:57:19.894644288 +0000 UTC m=+0.391581994 container attach 62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c (image=quay.io/ceph/ceph:v19, name=flamboyant_noyce, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Nov 27 05:57:19 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 13 completed events
Nov 27 05:57:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=14.131459236s) [] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 111.020919800s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=14.131459236s) [] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020919800s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.1f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844661713s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.734283447s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.1f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844661713s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734283447s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.797966003s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.687774658s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[6.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.797966003s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.687774658s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.797569275s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.687690735s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844484329s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.734626770s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[6.12( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844383240s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.734527588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844484329s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734626770s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[6.1e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.797569275s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.687690735s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[6.12( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844383240s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734527588s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=14.130631447s) [] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 111.020935059s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=14.130631447s) [] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020935059s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.15( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844336510s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.734695435s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844380379s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.734779358s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.15( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844336510s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734695435s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[6.17( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844380379s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734779358s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=14.130512238s) [] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 111.020957947s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=14.130512238s) [] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020957947s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.541445732s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 active pruub 111.432083130s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844153404s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.734832764s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.541445732s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.432083130s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844153404s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734832764s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.845370293s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736236572s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.845370293s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736236572s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.845404625s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736305237s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.845404625s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736305237s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.540903091s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 active pruub 111.431900024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.540903091s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431900024s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844982147s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736175537s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844982147s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736175537s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.540296555s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 active pruub 111.431648254s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.540296555s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431648254s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[5.4( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844952583s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736419678s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[5.4( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844952583s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736419678s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=14.129794121s) [] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 active pruub 111.021392822s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=45 pruub=14.129794121s) [] r=-1 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.021392822s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.539807320s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 active pruub 111.431510925s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.539807320s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431510925s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844694138s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736587524s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844694138s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736587524s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844524384s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736648560s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844524384s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736648560s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[5.e( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844395638s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736701965s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[5.e( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844395638s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736701965s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.539799690s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 active pruub 111.432197571s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.539799690s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.432197571s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844281197s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736778259s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.1a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844281197s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736778259s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.538430214s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 active pruub 111.431030273s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.538430214s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431030273s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844213486s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736915588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[5.1a( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.844213486s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736915588s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.538304329s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 active pruub 111.431045532s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=45 pruub=14.538304329s) [] r=-1 lpr=45 pi=[35,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431045532s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.842114449s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 active pruub 107.736831665s@ mbc={}] PeeringState::start_peering_interval up [1] -> [], acting [1] -> [], acting_primary 1 -> -1, up_primary 1 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 45 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=45 pruub=10.842114449s) [] r=-1 lpr=45 pi=[42,45)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736831665s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service node-exporter spec with placement *
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 128.0M
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 128.0M
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v145: 193 pgs: 193 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: Saving service node-exporter spec with placement *
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: OSD bench result of 8937.637731 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1778559861; not ready for session (expect reconnect)
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:20 np0005537642 ceph-mgr[74636]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 27 05:57:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:20 np0005537642 flamboyant_noyce[87559]: Scheduled node-exporter update...
Nov 27 05:57:20 np0005537642 flamboyant_noyce[87559]: Scheduled grafana update...
Nov 27 05:57:20 np0005537642 flamboyant_noyce[87559]: Scheduled prometheus update...
Nov 27 05:57:20 np0005537642 flamboyant_noyce[87559]: Scheduled alertmanager update...
Nov 27 05:57:20 np0005537642 systemd[1]: libpod-62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c.scope: Deactivated successfully.
Nov 27 05:57:20 np0005537642 podman[87544]: 2025-11-27 10:57:20.788238272 +0000 UTC m=+1.285175938 container died 62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c (image=quay.io/ceph/ceph:v19, name=flamboyant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Nov 27 05:57:20 np0005537642 systemd[1]: var-lib-containers-storage-overlay-9207c72fbc2318994620e602f3edcbd99dc9c057314593830a0d6c4c4cd3acd6-merged.mount: Deactivated successfully.
Nov 27 05:57:21 np0005537642 podman[87544]: 2025-11-27 10:57:21.084781066 +0000 UTC m=+1.581718772 container remove 62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c (image=quay.io/ceph/ceph:v19, name=flamboyant_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:21 np0005537642 systemd[1]: libpod-conmon-62b50d181a9ed42d50e701bfaac01a07ecadc3e14b9f2a4863d307615397de4c.scope: Deactivated successfully.
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 27 05:57:21 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1778559861,v1:192.168.122.102:6801/1778559861] boot
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.335611343s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.687690735s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[6.1e( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.335571289s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.687690735s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=12.668775558s) [2] r=-1 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020919800s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.1f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382145882s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734283447s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[7.1f( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=12.668744087s) [2] r=-1 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020919800s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.1f( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382100105s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734283447s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.335507393s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.687774658s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.079885483s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.432197571s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[6.1c( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.335469246s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.687774658s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.079828262s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.432197571s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[6.12( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382134438s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734527588s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[6.12( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382123947s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734527588s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=12.668414116s) [2] r=-1 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020935059s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.15( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382151604s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734695435s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[7.11( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=12.668402672s) [2] r=-1 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020935059s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.15( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382135391s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734695435s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382186890s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734779358s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382047653s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734626770s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[6.17( empty local-lis/les=42/43 n=0 ec=39/24 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382172585s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734779358s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=12.668317795s) [2] r=-1 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020957947s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.15( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.382013321s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734626770s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[7.16( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=12.668305397s) [2] r=-1 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.020957947s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.079389572s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.432083130s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.079376221s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.432083130s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383471489s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736305237s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.381982803s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734832764s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383381844s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736236572s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.11( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.381969452s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.734832764s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.9( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383368492s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736236572s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.e( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383458138s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736305237s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.078996658s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431900024s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383230209s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736175537s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.8( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383220673s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736175537s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.078967094s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431900024s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[5.4( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383300781s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736419678s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[5.4( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383290291s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736419678s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=12.668189049s) [2] r=-1 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.021392822s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[7.5( empty local-lis/les=39/41 n=0 ec=39/27 lis/c=39/39 les/c/f=41/41/0 sis=46 pruub=12.668178558s) [2] r=-1 lpr=46 pi=[39,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.021392822s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.078227997s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431510925s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.078363419s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431648254s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.078213692s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431510925s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.078345299s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431648254s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383203506s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736587524s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[4.1( empty local-lis/les=42/43 n=0 ec=37/18 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383191109s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736587524s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383182526s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736648560s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.9( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383143425s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736648560s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383194923s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736778259s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.1a( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383180618s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736778259s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383217812s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736831665s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[5.e( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383078575s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736701965s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[3.1d( empty local-lis/les=42/43 n=0 ec=35/15 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383202553s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736831665s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[5.e( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383045197s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736701965s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.077311516s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431030273s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.077284813s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431030273s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383118629s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736915588s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.077241898s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431045532s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[5.1a( empty local-lis/les=42/43 n=0 ec=38/21 lis/c=42/42 les/c/f=43/43/0 sis=46 pruub=9.383102417s) [2] r=-1 lpr=46 pi=[42,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 107.736915588s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 46 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/14 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.077226639s) [2] r=-1 lpr=46 pi=[35,46)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 111.431045532s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:57:21 np0005537642 python3[87899]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: Saving service grafana spec with placement compute-0;count:1
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: Adjusting osd_memory_target on compute-2 to 128.0M
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: Saving service prometheus spec with placement compute-0;count:1
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: Unable to set osd_memory_target on compute-2 to 134220595: error parsing value: Value '134220595' is below minimum 939524096
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: Saving service alertmanager spec with placement compute-0;count:1
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: osd.2 [v2:192.168.122.102:6800/1778559861,v1:192.168.122.102:6801/1778559861] boot
Nov 27 05:57:21 np0005537642 podman[87958]: 2025-11-27 10:57:21.797417589 +0000 UTC m=+0.083680919 container create 29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade (image=quay.io/ceph/ceph:v19, name=crazy_shockley, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:21 np0005537642 podman[87958]: 2025-11-27 10:57:21.738961929 +0000 UTC m=+0.025225309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:21 np0005537642 systemd[1]: Started libpod-conmon-29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade.scope.
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:21 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:21 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a03fe5eb76d69140f611b6fdd021954c1d88017bea5b8af66f879d8bf68a0e52/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:21 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a03fe5eb76d69140f611b6fdd021954c1d88017bea5b8af66f879d8bf68a0e52/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:21 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a03fe5eb76d69140f611b6fdd021954c1d88017bea5b8af66f879d8bf68a0e52/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:21 np0005537642 podman[87958]: 2025-11-27 10:57:21.982358168 +0000 UTC m=+0.268621518 container init 29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade (image=quay.io/ceph/ceph:v19, name=crazy_shockley, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:21 np0005537642 podman[87958]: 2025-11-27 10:57:21.992756802 +0000 UTC m=+0.279020142 container start 29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade (image=quay.io/ceph/ceph:v19, name=crazy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:57:22 np0005537642 podman[87958]: 2025-11-27 10:57:22.074980897 +0000 UTC m=+0.361244267 container attach 29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade (image=quay.io/ceph/ceph:v19, name=crazy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:57:22 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 27 05:57:22 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2490680151' entity='client.admin' 
Nov 27 05:57:22 np0005537642 systemd[1]: libpod-29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade.scope: Deactivated successfully.
Nov 27 05:57:22 np0005537642 podman[87958]: 2025-11-27 10:57:22.642783174 +0000 UTC m=+0.929046574 container died 29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade (image=quay.io/ceph/ceph:v19, name=crazy_shockley, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:22 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v147: 193 pgs: 29 peering, 164 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 27 05:57:22 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 27 05:57:22 np0005537642 systemd[1]: var-lib-containers-storage-overlay-a03fe5eb76d69140f611b6fdd021954c1d88017bea5b8af66f879d8bf68a0e52-merged.mount: Deactivated successfully.
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:57:23 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2490680151' entity='client.admin' 
Nov 27 05:57:23 np0005537642 podman[87958]: 2025-11-27 10:57:23.073656546 +0000 UTC m=+1.359919916 container remove 29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade (image=quay.io/ceph/ceph:v19, name=crazy_shockley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 27 05:57:23 np0005537642 systemd[1]: libpod-conmon-29c8a0321f4686973e8821de1eb5f6b07c90a28d36bff4c4cba3abea58c80ade.scope: Deactivated successfully.
Nov 27 05:57:23 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 27 05:57:23 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 27 05:57:23 np0005537642 podman[88220]: 2025-11-27 10:57:23.287953564 +0000 UTC m=+0.029177144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:23 np0005537642 podman[88220]: 2025-11-27 10:57:23.435678425 +0000 UTC m=+0.176902015 container create cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:57:23 np0005537642 systemd[1]: Started libpod-conmon-cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395.scope.
Nov 27 05:57:23 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:23 np0005537642 python3[88252]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:23 np0005537642 podman[88220]: 2025-11-27 10:57:23.698531762 +0000 UTC m=+0.439755392 container init cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_diffie, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:23 np0005537642 podman[88220]: 2025-11-27 10:57:23.709033609 +0000 UTC m=+0.450257169 container start cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_diffie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1)
Nov 27 05:57:23 np0005537642 adoring_diffie[88255]: 167 167
Nov 27 05:57:23 np0005537642 systemd[1]: libpod-cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395.scope: Deactivated successfully.
Nov 27 05:57:23 np0005537642 podman[88220]: 2025-11-27 10:57:23.766455828 +0000 UTC m=+0.507679468 container attach cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:23 np0005537642 podman[88220]: 2025-11-27 10:57:23.76720856 +0000 UTC m=+0.508432150 container died cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_diffie, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 27 05:57:24 np0005537642 systemd[1]: var-lib-containers-storage-overlay-0e3a9c1439eb14b46aaa118667ecbd59271946fd33d14ada62e8b352a0342568-merged.mount: Deactivated successfully.
Nov 27 05:57:24 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 27 05:57:24 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 27 05:57:24 np0005537642 podman[88220]: 2025-11-27 10:57:24.459420556 +0000 UTC m=+1.200644146 container remove cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_diffie, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:24 np0005537642 systemd[1]: libpod-conmon-cdcdfb5ed449206598fcd70c8cba2a87e700577a7fb2a1542c0c0f3871cf0395.scope: Deactivated successfully.
Nov 27 05:57:24 np0005537642 podman[88258]: 2025-11-27 10:57:24.596166396 +0000 UTC m=+0.944425163 container create 24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee (image=quay.io/ceph/ceph:v19, name=elated_cohen, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 27 05:57:24 np0005537642 podman[88258]: 2025-11-27 10:57:24.524535881 +0000 UTC m=+0.872794698 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:24 np0005537642 systemd[1]: Started libpod-conmon-24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee.scope.
Nov 27 05:57:24 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v149: 193 pgs: 29 peering, 164 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:24 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/215a1da7f1e937a7ec37775318f29dda98a67cc64a7538db5ee5fba80c76932e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/215a1da7f1e937a7ec37775318f29dda98a67cc64a7538db5ee5fba80c76932e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/215a1da7f1e937a7ec37775318f29dda98a67cc64a7538db5ee5fba80c76932e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:24 np0005537642 podman[88292]: 2025-11-27 10:57:24.731297248 +0000 UTC m=+0.129532410 container create 3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:24 np0005537642 podman[88292]: 2025-11-27 10:57:24.640194643 +0000 UTC m=+0.038429815 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:24 np0005537642 systemd[1]: Started libpod-conmon-3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552.scope.
Nov 27 05:57:24 np0005537642 podman[88258]: 2025-11-27 10:57:24.846121516 +0000 UTC m=+1.194380323 container init 24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee (image=quay.io/ceph/ceph:v19, name=elated_cohen, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:24 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:24 np0005537642 podman[88258]: 2025-11-27 10:57:24.857375145 +0000 UTC m=+1.205633902 container start 24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee (image=quay.io/ceph/ceph:v19, name=elated_cohen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85450bc3d136cdbb416f013113d7ff636bd3e07a52201a8b451465511ca403a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85450bc3d136cdbb416f013113d7ff636bd3e07a52201a8b451465511ca403a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85450bc3d136cdbb416f013113d7ff636bd3e07a52201a8b451465511ca403a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85450bc3d136cdbb416f013113d7ff636bd3e07a52201a8b451465511ca403a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85450bc3d136cdbb416f013113d7ff636bd3e07a52201a8b451465511ca403a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:24 np0005537642 podman[88258]: 2025-11-27 10:57:24.893057089 +0000 UTC m=+1.241315846 container attach 24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee (image=quay.io/ceph/ceph:v19, name=elated_cohen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 27 05:57:24 np0005537642 podman[88292]: 2025-11-27 10:57:24.916153245 +0000 UTC m=+0.314388427 container init 3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:24 np0005537642 podman[88292]: 2025-11-27 10:57:24.927734043 +0000 UTC m=+0.325969165 container start 3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Nov 27 05:57:24 np0005537642 podman[88292]: 2025-11-27 10:57:24.943160785 +0000 UTC m=+0.341396017 container attach 3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 27 05:57:25 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Nov 27 05:57:25 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Nov 27 05:57:25 np0005537642 great_wilson[88314]: --> passed data devices: 0 physical, 1 LVM
Nov 27 05:57:25 np0005537642 great_wilson[88314]: --> All data devices are unavailable
Nov 27 05:57:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Nov 27 05:57:25 np0005537642 systemd[1]: libpod-3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552.scope: Deactivated successfully.
Nov 27 05:57:25 np0005537642 podman[88292]: 2025-11-27 10:57:25.313736583 +0000 UTC m=+0.711971725 container died 3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/998130342' entity='client.admin' 
Nov 27 05:57:25 np0005537642 systemd[1]: libpod-24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee.scope: Deactivated successfully.
Nov 27 05:57:25 np0005537642 podman[88258]: 2025-11-27 10:57:25.418413745 +0000 UTC m=+1.766672472 container died 24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee (image=quay.io/ceph/ceph:v19, name=elated_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 27 05:57:25 np0005537642 systemd[1]: var-lib-containers-storage-overlay-215a1da7f1e937a7ec37775318f29dda98a67cc64a7538db5ee5fba80c76932e-merged.mount: Deactivated successfully.
Nov 27 05:57:25 np0005537642 podman[88258]: 2025-11-27 10:57:25.776435287 +0000 UTC m=+2.124694044 container remove 24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee (image=quay.io/ceph/ceph:v19, name=elated_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:25 np0005537642 systemd[1]: libpod-conmon-24440b28c75947fffc3cb8aae7db2475a636d00ad83de395ba30aa3896c1adee.scope: Deactivated successfully.
Nov 27 05:57:25 np0005537642 systemd[1]: var-lib-containers-storage-overlay-85450bc3d136cdbb416f013113d7ff636bd3e07a52201a8b451465511ca403a0-merged.mount: Deactivated successfully.
Nov 27 05:57:26 np0005537642 podman[88292]: 2025-11-27 10:57:26.192704471 +0000 UTC m=+1.590939633 container remove 3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wilson, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 27 05:57:26 np0005537642 systemd[1]: libpod-conmon-3f8888ed59374294380be4cbd1aff588bc42c04cbe50a9c5269f37ca49bae552.scope: Deactivated successfully.
Nov 27 05:57:26 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/998130342' entity='client.admin' 
Nov 27 05:57:26 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Nov 27 05:57:26 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Nov 27 05:57:26 np0005537642 python3[88404]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:26 np0005537642 podman[88405]: 2025-11-27 10:57:26.396282915 +0000 UTC m=+0.123076941 container create d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd (image=quay.io/ceph/ceph:v19, name=reverent_perlman, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Nov 27 05:57:26 np0005537642 podman[88405]: 2025-11-27 10:57:26.319645884 +0000 UTC m=+0.046439970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:26 np0005537642 systemd[1]: Started libpod-conmon-d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd.scope.
Nov 27 05:57:26 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b6ae10d403f71cfc23897479efc516e8121ce10f583d0035ac5c7b563af294/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b6ae10d403f71cfc23897479efc516e8121ce10f583d0035ac5c7b563af294/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b6ae10d403f71cfc23897479efc516e8121ce10f583d0035ac5c7b563af294/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:26 np0005537642 podman[88405]: 2025-11-27 10:57:26.579092952 +0000 UTC m=+0.305886948 container init d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd (image=quay.io/ceph/ceph:v19, name=reverent_perlman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 27 05:57:26 np0005537642 podman[88405]: 2025-11-27 10:57:26.588464746 +0000 UTC m=+0.315258742 container start d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd (image=quay.io/ceph/ceph:v19, name=reverent_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:26 np0005537642 podman[88405]: 2025-11-27 10:57:26.618630268 +0000 UTC m=+0.345424274 container attach d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd (image=quay.io/ceph/ceph:v19, name=reverent_perlman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 27 05:57:26 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v150: 193 pgs: 29 peering, 164 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:26 np0005537642 podman[88530]: 2025-11-27 10:57:26.912184754 +0000 UTC m=+0.118271190 container create abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:26 np0005537642 podman[88530]: 2025-11-27 10:57:26.824408347 +0000 UTC m=+0.030494803 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Nov 27 05:57:27 np0005537642 systemd[1]: Started libpod-conmon-abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49.scope.
Nov 27 05:57:27 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2094396554' entity='client.admin' 
Nov 27 05:57:27 np0005537642 podman[88530]: 2025-11-27 10:57:27.106815117 +0000 UTC m=+0.312901523 container init abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 27 05:57:27 np0005537642 podman[88530]: 2025-11-27 10:57:27.112477322 +0000 UTC m=+0.318563728 container start abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 27 05:57:27 np0005537642 brave_gould[88547]: 167 167
Nov 27 05:57:27 np0005537642 systemd[1]: libpod-abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49.scope: Deactivated successfully.
Nov 27 05:57:27 np0005537642 conmon[88547]: conmon abc836c8b5759279f09d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49.scope/container/memory.events
Nov 27 05:57:27 np0005537642 systemd[1]: libpod-d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd.scope: Deactivated successfully.
Nov 27 05:57:27 np0005537642 podman[88530]: 2025-11-27 10:57:27.130299484 +0000 UTC m=+0.336386240 container attach abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 27 05:57:27 np0005537642 podman[88530]: 2025-11-27 10:57:27.130756887 +0000 UTC m=+0.336843313 container died abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 27 05:57:27 np0005537642 podman[88405]: 2025-11-27 10:57:27.175751003 +0000 UTC m=+0.902545039 container died d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd (image=quay.io/ceph/ceph:v19, name=reverent_perlman, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Nov 27 05:57:27 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 27 05:57:27 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 27 05:57:27 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2094396554' entity='client.admin' 
Nov 27 05:57:27 np0005537642 systemd[1]: var-lib-containers-storage-overlay-c6b6ae10d403f71cfc23897479efc516e8121ce10f583d0035ac5c7b563af294-merged.mount: Deactivated successfully.
Nov 27 05:57:27 np0005537642 podman[88554]: 2025-11-27 10:57:27.411303202 +0000 UTC m=+0.269256966 container remove d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd (image=quay.io/ceph/ceph:v19, name=reverent_perlman, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:27 np0005537642 systemd[1]: libpod-conmon-d86f01a94712cb8c1b1871e6ba2d69bc501d3a4e28e0552821fdd9ee849ecbbd.scope: Deactivated successfully.
Nov 27 05:57:27 np0005537642 systemd[1]: var-lib-containers-storage-overlay-a7aca1227dbc0054b3a186e212f2cb5d6fe55bc0f4677f7205a73169aa8de750-merged.mount: Deactivated successfully.
Nov 27 05:57:27 np0005537642 podman[88530]: 2025-11-27 10:57:27.645636506 +0000 UTC m=+0.851722942 container remove abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 27 05:57:27 np0005537642 systemd[1]: libpod-conmon-abc836c8b5759279f09d98149927bc89f5b2138061c4b057c5cca29d95e12d49.scope: Deactivated successfully.
Nov 27 05:57:27 np0005537642 podman[88587]: 2025-11-27 10:57:27.848310163 +0000 UTC m=+0.037460667 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:27 np0005537642 podman[88587]: 2025-11-27 10:57:27.957032833 +0000 UTC m=+0.146183327 container create fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:28 np0005537642 systemd[1]: Started libpod-conmon-fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2.scope.
Nov 27 05:57:28 np0005537642 python3[88626]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:28 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d5d39f90f30e83fe1c35755c7cd54cdeefe041b8fb6de5b30aee6796e81acf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d5d39f90f30e83fe1c35755c7cd54cdeefe041b8fb6de5b30aee6796e81acf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d5d39f90f30e83fe1c35755c7cd54cdeefe041b8fb6de5b30aee6796e81acf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d5d39f90f30e83fe1c35755c7cd54cdeefe041b8fb6de5b30aee6796e81acf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:28 np0005537642 podman[88587]: 2025-11-27 10:57:28.140726775 +0000 UTC m=+0.329877279 container init fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:28 np0005537642 podman[88587]: 2025-11-27 10:57:28.150727768 +0000 UTC m=+0.339878262 container start fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:28 np0005537642 podman[88587]: 2025-11-27 10:57:28.197402383 +0000 UTC m=+0.386552907 container attach fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_newton, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 27 05:57:28 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.1 deep-scrub starts
Nov 27 05:57:28 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.1 deep-scrub ok
Nov 27 05:57:28 np0005537642 serene_newton[88629]: {
Nov 27 05:57:28 np0005537642 serene_newton[88629]:    "1": [
Nov 27 05:57:28 np0005537642 serene_newton[88629]:        {
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "devices": [
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "/dev/loop3"
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            ],
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "lv_name": "ceph_lv0",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "lv_size": "21470642176",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4c838139-e0c9-556a-a9ca-e4422f459af7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=047f3e15-ba18-4c86-b24b-f8e9584c5eff,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "lv_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "name": "ceph_lv0",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "tags": {
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.block_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.cephx_lockbox_secret": "",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.cluster_fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.cluster_name": "ceph",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.crush_device_class": "",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.encrypted": "0",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.osd_fsid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.osd_id": "1",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.type": "block",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.vdo": "0",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:                "ceph.with_tpm": "0"
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            },
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "type": "block",
Nov 27 05:57:28 np0005537642 serene_newton[88629]:            "vg_name": "ceph_vg0"
Nov 27 05:57:28 np0005537642 serene_newton[88629]:        }
Nov 27 05:57:28 np0005537642 serene_newton[88629]:    ]
Nov 27 05:57:28 np0005537642 serene_newton[88629]: }
Nov 27 05:57:28 np0005537642 systemd[1]: libpod-fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2.scope: Deactivated successfully.
Nov 27 05:57:28 np0005537642 podman[88587]: 2025-11-27 10:57:28.460295772 +0000 UTC m=+0.649446266 container died fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_newton, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:28 np0005537642 systemd[1]: var-lib-containers-storage-overlay-b4d5d39f90f30e83fe1c35755c7cd54cdeefe041b8fb6de5b30aee6796e81acf-merged.mount: Deactivated successfully.
Nov 27 05:57:28 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v151: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:28 np0005537642 podman[88587]: 2025-11-27 10:57:28.75492754 +0000 UTC m=+0.944078064 container remove fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_newton, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:28 np0005537642 python3[88690]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.qnrkij/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:28 np0005537642 systemd[1]: libpod-conmon-fa36e464b613719f8fcced6660a810d44b697c6265477e4e44fd1bbcc2014da2.scope: Deactivated successfully.
Nov 27 05:57:28 np0005537642 podman[88691]: 2025-11-27 10:57:28.897453929 +0000 UTC m=+0.108182806 container create 20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 27 05:57:28 np0005537642 podman[88691]: 2025-11-27 10:57:28.830187901 +0000 UTC m=+0.040916848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:29 np0005537642 systemd[1]: Started libpod-conmon-20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2.scope.
Nov 27 05:57:29 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:29 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72145758a65e3314e8e4e63968fa50df796d1047ed4a62f63a85b374177f37bb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:29 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72145758a65e3314e8e4e63968fa50df796d1047ed4a62f63a85b374177f37bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:29 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72145758a65e3314e8e4e63968fa50df796d1047ed4a62f63a85b374177f37bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:29 np0005537642 podman[88691]: 2025-11-27 10:57:29.095351607 +0000 UTC m=+0.306080524 container init 20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:29 np0005537642 podman[88691]: 2025-11-27 10:57:29.102799734 +0000 UTC m=+0.313528641 container start 20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:29 np0005537642 podman[88691]: 2025-11-27 10:57:29.152668813 +0000 UTC m=+0.363397730 container attach 20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:29 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 27 05:57:29 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 27 05:57:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.qnrkij/server_addr}] v 0)
Nov 27 05:57:29 np0005537642 podman[88818]: 2025-11-27 10:57:29.471372764 +0000 UTC m=+0.034274323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:29 np0005537642 podman[88818]: 2025-11-27 10:57:29.749250142 +0000 UTC m=+0.312151711 container create 40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_meitner, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 27 05:57:29 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4181589533' entity='client.admin' 
Nov 27 05:57:29 np0005537642 systemd[1]: libpod-20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2.scope: Deactivated successfully.
Nov 27 05:57:29 np0005537642 podman[88691]: 2025-11-27 10:57:29.996015689 +0000 UTC m=+1.206744646 container died 20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:30 np0005537642 systemd[1]: Started libpod-conmon-40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c.scope.
Nov 27 05:57:30 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:30 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 27 05:57:30 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 27 05:57:30 np0005537642 podman[88818]: 2025-11-27 10:57:30.263774821 +0000 UTC m=+0.826676460 container init 40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 27 05:57:30 np0005537642 podman[88818]: 2025-11-27 10:57:30.275097762 +0000 UTC m=+0.837999341 container start 40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_meitner, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 27 05:57:30 np0005537642 sad_meitner[88847]: 167 167
Nov 27 05:57:30 np0005537642 systemd[1]: libpod-40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c.scope: Deactivated successfully.
Nov 27 05:57:30 np0005537642 podman[88818]: 2025-11-27 10:57:30.323996642 +0000 UTC m=+0.886898211 container attach 40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 27 05:57:30 np0005537642 podman[88818]: 2025-11-27 10:57:30.325135925 +0000 UTC m=+0.888037554 container died 40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_meitner, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:30 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/4181589533' entity='client.admin' 
Nov 27 05:57:30 np0005537642 systemd[1]: var-lib-containers-storage-overlay-6d51f1cbf4fdecd12933806291e02921701ac48de6068ef95cf09fd483d8eabd-merged.mount: Deactivated successfully.
Nov 27 05:57:30 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v152: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:30 np0005537642 podman[88818]: 2025-11-27 10:57:30.836914134 +0000 UTC m=+1.399815713 container remove 40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_meitner, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:30 np0005537642 systemd[1]: libpod-conmon-40e5c8e612bc8f73858ad2b40b2e02d88093bc6072b806e30ae9f584dd24e20c.scope: Deactivated successfully.
Nov 27 05:57:30 np0005537642 systemd[1]: var-lib-containers-storage-overlay-72145758a65e3314e8e4e63968fa50df796d1047ed4a62f63a85b374177f37bb-merged.mount: Deactivated successfully.
Nov 27 05:57:31 np0005537642 podman[88691]: 2025-11-27 10:57:31.061819601 +0000 UTC m=+2.272548498 container remove 20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2 (image=quay.io/ceph/ceph:v19, name=flamboyant_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:31 np0005537642 systemd[1]: libpod-conmon-20649a9569aa67695879fbe1da5fe3d5d55fd9c462f672ea8824e2161c89aac2.scope: Deactivated successfully.
Nov 27 05:57:31 np0005537642 podman[88875]: 2025-11-27 10:57:31.074550713 +0000 UTC m=+0.071337457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:31 np0005537642 podman[88875]: 2025-11-27 10:57:31.170072117 +0000 UTC m=+0.166858861 container create 863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:31 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 27 05:57:31 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 27 05:57:31 np0005537642 systemd[1]: Started libpod-conmon-863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68.scope.
Nov 27 05:57:31 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec67bf17ec73a5bb297ce8cc79d032f1bd312389e893264c97d09cbf0debaa3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec67bf17ec73a5bb297ce8cc79d032f1bd312389e893264c97d09cbf0debaa3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec67bf17ec73a5bb297ce8cc79d032f1bd312389e893264c97d09cbf0debaa3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec67bf17ec73a5bb297ce8cc79d032f1bd312389e893264c97d09cbf0debaa3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e47 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:31 np0005537642 podman[88875]: 2025-11-27 10:57:31.432289756 +0000 UTC m=+0.429076510 container init 863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 27 05:57:31 np0005537642 podman[88875]: 2025-11-27 10:57:31.44301496 +0000 UTC m=+0.439801704 container start 863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Nov 27 05:57:31 np0005537642 podman[88875]: 2025-11-27 10:57:31.481257058 +0000 UTC m=+0.478043872 container attach 863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:31 np0005537642 python3[88941]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.npcryb/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:32 np0005537642 podman[88966]: 2025-11-27 10:57:32.09983265 +0000 UTC m=+0.104098435 container create 068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e (image=quay.io/ceph/ceph:v19, name=naughty_almeida, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:32 np0005537642 podman[88966]: 2025-11-27 10:57:32.021894821 +0000 UTC m=+0.026160586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:32 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 27 05:57:32 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 27 05:57:32 np0005537642 systemd[1]: Started libpod-conmon-068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e.scope.
Nov 27 05:57:32 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:32 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de84e53add66a88c177efd91498c855877d30939c6344657b4dae364e05e5384/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:32 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de84e53add66a88c177efd91498c855877d30939c6344657b4dae364e05e5384/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:32 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de84e53add66a88c177efd91498c855877d30939c6344657b4dae364e05e5384/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:32 np0005537642 lvm[89010]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:57:32 np0005537642 lvm[89010]: VG ceph_vg0 finished
Nov 27 05:57:32 np0005537642 musing_jackson[88891]: {}
Nov 27 05:57:32 np0005537642 podman[88966]: 2025-11-27 10:57:32.33266068 +0000 UTC m=+0.336926465 container init 068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e (image=quay.io/ceph/ceph:v19, name=naughty_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:32 np0005537642 podman[88966]: 2025-11-27 10:57:32.344244569 +0000 UTC m=+0.348510324 container start 068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e (image=quay.io/ceph/ceph:v19, name=naughty_almeida, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:32 np0005537642 systemd[1]: libpod-863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68.scope: Deactivated successfully.
Nov 27 05:57:32 np0005537642 systemd[1]: libpod-863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68.scope: Consumed 1.550s CPU time.
Nov 27 05:57:32 np0005537642 podman[88966]: 2025-11-27 10:57:32.382118327 +0000 UTC m=+0.386384112 container attach 068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e (image=quay.io/ceph/ceph:v19, name=naughty_almeida, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 27 05:57:32 np0005537642 podman[88875]: 2025-11-27 10:57:32.417420869 +0000 UTC m=+1.414207863 container died 863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:32 np0005537642 systemd[1]: var-lib-containers-storage-overlay-ec67bf17ec73a5bb297ce8cc79d032f1bd312389e893264c97d09cbf0debaa3b-merged.mount: Deactivated successfully.
Nov 27 05:57:32 np0005537642 podman[89013]: 2025-11-27 10:57:32.601275597 +0000 UTC m=+0.221840210 container remove 863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:32 np0005537642 systemd[1]: libpod-conmon-863314339ef617fa2bcf519434cdf0cf209d2f0d72035b06e43d0f697879bf68.scope: Deactivated successfully.
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:57:32 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v153: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.npcryb/server_addr}] v 0)
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1762285947' entity='client.admin' 
Nov 27 05:57:32 np0005537642 systemd[1]: libpod-068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e.scope: Deactivated successfully.
Nov 27 05:57:32 np0005537642 podman[88966]: 2025-11-27 10:57:32.796457035 +0000 UTC m=+0.800722770 container died 068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e (image=quay.io/ceph/ceph:v19, name=naughty_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:32 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 32cc2a1a-4d7c-402b-b47b-dbecf30b6e8e (Updating rgw.rgw deployment (+3 -> 3))
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ujaphm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 27 05:57:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ujaphm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:57:33 np0005537642 systemd[1]: var-lib-containers-storage-overlay-de84e53add66a88c177efd91498c855877d30939c6344657b4dae364e05e5384-merged.mount: Deactivated successfully.
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ujaphm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:33 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.ujaphm on compute-2
Nov 27 05:57:33 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.ujaphm on compute-2
Nov 27 05:57:33 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Nov 27 05:57:33 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Nov 27 05:57:33 np0005537642 podman[88966]: 2025-11-27 10:57:33.259822748 +0000 UTC m=+1.264088523 container remove 068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e (image=quay.io/ceph/ceph:v19, name=naughty_almeida, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:33 np0005537642 systemd[1]: libpod-conmon-068391740b41d381d2a08f48b83fcc7ae1b0747fdeff199b0df5c6b526f6bb6e.scope: Deactivated successfully.
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1762285947' entity='client.admin' 
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ujaphm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.ujaphm", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:33 np0005537642 ceph-mon[74338]: Deploying daemon rgw.rgw.compute-2.ujaphm on compute-2
Nov 27 05:57:34 np0005537642 python3[89086]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.yyrxaz/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:34 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 27 05:57:34 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 27 05:57:34 np0005537642 podman[89087]: 2025-11-27 10:57:34.224638897 +0000 UTC m=+0.043163254 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:34 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v154: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:34 np0005537642 podman[89087]: 2025-11-27 10:57:34.692723966 +0000 UTC m=+0.511248253 container create cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf (image=quay.io/ceph/ceph:v19, name=pedantic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 27 05:57:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 27 05:57:34 np0005537642 systemd[1]: Started libpod-conmon-cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf.scope.
Nov 27 05:57:34 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:35 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ded795574270e536eafcbe0c51befe0927483fcea4527d67c8511514630fe08/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ded795574270e536eafcbe0c51befe0927483fcea4527d67c8511514630fe08/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:35 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ded795574270e536eafcbe0c51befe0927483fcea4527d67c8511514630fe08/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:35 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 27 05:57:35 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 27 05:57:35 np0005537642 podman[89087]: 2025-11-27 10:57:35.365016679 +0000 UTC m=+1.183541036 container init cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf (image=quay.io/ceph/ceph:v19, name=pedantic_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1)
Nov 27 05:57:35 np0005537642 podman[89087]: 2025-11-27 10:57:35.376536626 +0000 UTC m=+1.195060883 container start cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf (image=quay.io/ceph/ceph:v19, name=pedantic_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 27 05:57:35 np0005537642 podman[89087]: 2025-11-27 10:57:35.545273782 +0000 UTC m=+1.363798129 container attach cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf (image=quay.io/ceph/ceph:v19, name=pedantic_meninsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mkskbt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 27 05:57:35 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mkskbt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mkskbt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.yyrxaz/server_addr}] v 0)
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:36 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.mkskbt on compute-1
Nov 27 05:57:36 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.mkskbt on compute-1
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/441122636' entity='client.admin' 
Nov 27 05:57:36 np0005537642 systemd[1]: libpod-cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf.scope: Deactivated successfully.
Nov 27 05:57:36 np0005537642 podman[89087]: 2025-11-27 10:57:36.231535254 +0000 UTC m=+2.050059541 container died cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf (image=quay.io/ceph/ceph:v19, name=pedantic_meninsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Nov 27 05:57:36 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 27 05:57:36 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 27 05:57:36 np0005537642 systemd[1]: var-lib-containers-storage-overlay-7ded795574270e536eafcbe0c51befe0927483fcea4527d67c8511514630fe08-merged.mount: Deactivated successfully.
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.102:0/441066255' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mkskbt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.mkskbt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: Deploying daemon rgw.rgw.compute-1.mkskbt on compute-1
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/441122636' entity='client.admin' 
Nov 27 05:57:36 np0005537642 podman[89087]: 2025-11-27 10:57:36.661271571 +0000 UTC m=+2.479795868 container remove cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf (image=quay.io/ceph/ceph:v19, name=pedantic_meninsky, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:36 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v156: 194 pgs: 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:36 np0005537642 systemd[1]: libpod-conmon-cd7ee83bc50dae9bc83e4d0ee119a58ff923fde68b3f31e25980f38cc578d5cf.scope: Deactivated successfully.
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 27 05:57:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 27 05:57:37 np0005537642 python3[89165]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:37 np0005537642 podman[89168]: 2025-11-27 10:57:37.187257838 +0000 UTC m=+0.113738286 container create 102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025 (image=quay.io/ceph/ceph:v19, name=silly_merkle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Nov 27 05:57:37 np0005537642 podman[89168]: 2025-11-27 10:57:37.097316218 +0000 UTC m=+0.023796716 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:37 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 27 05:57:37 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 27 05:57:37 np0005537642 systemd[1]: Started libpod-conmon-102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025.scope.
Nov 27 05:57:37 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:37 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b5513317998f814266c6a21fc1ed0dcb5cd5aa63a9a4dd4b436ad94c6faffa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:37 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b5513317998f814266c6a21fc1ed0dcb5cd5aa63a9a4dd4b436ad94c6faffa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:37 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49b5513317998f814266c6a21fc1ed0dcb5cd5aa63a9a4dd4b436ad94c6faffa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:37 np0005537642 podman[89168]: 2025-11-27 10:57:37.516820967 +0000 UTC m=+0.443301415 container init 102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025 (image=quay.io/ceph/ceph:v19, name=silly_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 27 05:57:37 np0005537642 podman[89168]: 2025-11-27 10:57:37.529825705 +0000 UTC m=+0.456306173 container start 102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025 (image=quay.io/ceph/ceph:v19, name=silly_merkle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:37 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:57:37 np0005537642 podman[89168]: 2025-11-27 10:57:37.595986692 +0000 UTC m=+0.522467180 container attach 102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025 (image=quay.io/ceph/ceph:v19, name=silly_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 27 05:57:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:57:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 27 05:57:37 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/925740538' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xkdunz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xkdunz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:57:38 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 27 05:57:38 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xkdunz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:38 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.xkdunz on compute-0
Nov 27 05:57:38 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.xkdunz on compute-0
Nov 27 05:57:38 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v159: 195 pgs: 1 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/925740538' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 27 05:57:38 np0005537642 silly_merkle[89183]: module 'dashboard' is already disabled
Nov 27 05:57:38 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.qnrkij(active, since 3m), standbys: compute-2.yyrxaz, compute-1.npcryb
Nov 27 05:57:38 np0005537642 systemd[1]: libpod-102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025.scope: Deactivated successfully.
Nov 27 05:57:38 np0005537642 podman[89168]: 2025-11-27 10:57:38.934596717 +0000 UTC m=+1.861077185 container died 102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025 (image=quay.io/ceph/ceph:v19, name=silly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/925740538' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.102:0/3611071583' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.101:0/4171051465' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xkdunz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.xkdunz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:39 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.15 deep-scrub starts
Nov 27 05:57:39 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.15 deep-scrub ok
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 27 05:57:39 np0005537642 systemd[1]: var-lib-containers-storage-overlay-49b5513317998f814266c6a21fc1ed0dcb5cd5aa63a9a4dd4b436ad94c6faffa-merged.mount: Deactivated successfully.
Nov 27 05:57:39 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 27 05:57:39 np0005537642 podman[89168]: 2025-11-27 10:57:39.797529782 +0000 UTC m=+2.724010250 container remove 102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025 (image=quay.io/ceph/ceph:v19, name=silly_merkle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:39 np0005537642 systemd[1]: libpod-conmon-102b63a95660457b32be808c32b1ff8b7cbbc004b2727153fae10e9a6fe55025.scope: Deactivated successfully.
Nov 27 05:57:39 np0005537642 podman[89315]: 2025-11-27 10:57:39.946274951 +0000 UTC m=+0.738084795 container create 992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Nov 27 05:57:39 np0005537642 podman[89315]: 2025-11-27 10:57:39.883398058 +0000 UTC m=+0.675207952 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: Deploying daemon rgw.rgw.compute-0.xkdunz on compute-0
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/925740538' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 27 05:57:40 np0005537642 systemd[1]: Started libpod-conmon-992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a.scope.
Nov 27 05:57:40 np0005537642 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Nov 27 05:57:40 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:40 np0005537642 python3[89358]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:40 np0005537642 podman[89315]: 2025-11-27 10:57:40.180420972 +0000 UTC m=+0.972230836 container init 992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:40 np0005537642 podman[89315]: 2025-11-27 10:57:40.186745462 +0000 UTC m=+0.978555306 container start 992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:40 np0005537642 elated_fermat[89362]: 167 167
Nov 27 05:57:40 np0005537642 systemd[1]: libpod-992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a.scope: Deactivated successfully.
Nov 27 05:57:40 np0005537642 podman[89315]: 2025-11-27 10:57:40.31220835 +0000 UTC m=+1.104018304 container attach 992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_fermat, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:40 np0005537642 podman[89315]: 2025-11-27 10:57:40.312850518 +0000 UTC m=+1.104660402 container died 992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_fermat, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 27 05:57:40 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 27 05:57:40 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Nov 27 05:57:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 27 05:57:40 np0005537642 systemd[1]: var-lib-containers-storage-overlay-f94d43bd95bf0cb74dd05f1b4bfbeed9b8b1f2b35bedf89eaab92db39bd21cfe-merged.mount: Deactivated successfully.
Nov 27 05:57:40 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v162: 196 pgs: 2 unknown, 1 creating+peering, 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:40 np0005537642 podman[89315]: 2025-11-27 10:57:40.692110515 +0000 UTC m=+1.483920409 container remove 992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_fermat, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 27 05:57:40 np0005537642 systemd[1]: libpod-conmon-992b6a2ae1a1a99f7d5d99fa907b9312fe077a56f4ab86664be820631785cf1a.scope: Deactivated successfully.
Nov 27 05:57:40 np0005537642 podman[89365]: 2025-11-27 10:57:40.812899481 +0000 UTC m=+0.633457547 container create 0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c (image=quay.io/ceph/ceph:v19, name=strange_germain, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:40 np0005537642 podman[89365]: 2025-11-27 10:57:40.756697367 +0000 UTC m=+0.577255423 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:40 np0005537642 systemd[1]: Started libpod-conmon-0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c.scope.
Nov 27 05:57:40 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:40 np0005537642 systemd[1]: Reloading.
Nov 27 05:57:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8ea2e3f3dd5f78a7b008bb7153ee17a25cd356c0cc488a51d8c2d9b3614fdf/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8ea2e3f3dd5f78a7b008bb7153ee17a25cd356c0cc488a51d8c2d9b3614fdf/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e8ea2e3f3dd5f78a7b008bb7153ee17a25cd356c0cc488a51d8c2d9b3614fdf/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:40 np0005537642 podman[89365]: 2025-11-27 10:57:40.988511212 +0000 UTC m=+0.809069288 container init 0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c (image=quay.io/ceph/ceph:v19, name=strange_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:40 np0005537642 podman[89365]: 2025-11-27 10:57:40.998091844 +0000 UTC m=+0.818649890 container start 0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c (image=quay.io/ceph/ceph:v19, name=strange_germain, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 27 05:57:41 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:57:41 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:57:41 np0005537642 podman[89365]: 2025-11-27 10:57:41.039974502 +0000 UTC m=+0.860532588 container attach 0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c (image=quay.io/ceph/ceph:v19, name=strange_germain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.102:0/3611071583' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.101:0/4171051465' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 27 05:57:41 np0005537642 systemd[1]: Reloading.
Nov 27 05:57:41 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 27 05:57:41 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 27 05:57:41 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:57:41 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:57:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 52 pg[10.0( empty local-lis/les=0/0 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Nov 27 05:57:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1331916404' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 27 05:57:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 53 pg[10.0( empty local-lis/les=52/53 n=0 ec=52/52 lis/c=0/0 les/c/f=0/0/0 sis=52) [1] r=0 lpr=52 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:57:41 np0005537642 systemd[1]: Starting Ceph rgw.rgw.compute-0.xkdunz for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:57:41 np0005537642 podman[89543]: 2025-11-27 10:57:41.770705866 +0000 UTC m=+0.027168211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:57:41 np0005537642 podman[89543]: 2025-11-27 10:57:41.906999622 +0000 UTC m=+0.163461877 container create c3ed71a75325bb4842de8c403ef62a9df297c78ee842d32de7a580c628074b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-rgw-rgw-compute-0-xkdunz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:42 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b365800fe2845f9e0d750c307c0a42bdb3aa1dcf9f4a3c13e7bbaf9cccd71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:42 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b365800fe2845f9e0d750c307c0a42bdb3aa1dcf9f4a3c13e7bbaf9cccd71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:42 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b365800fe2845f9e0d750c307c0a42bdb3aa1dcf9f4a3c13e7bbaf9cccd71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:42 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/560b365800fe2845f9e0d750c307c0a42bdb3aa1dcf9f4a3c13e7bbaf9cccd71/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.xkdunz supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:42 np0005537642 podman[89543]: 2025-11-27 10:57:42.099957235 +0000 UTC m=+0.356419510 container init c3ed71a75325bb4842de8c403ef62a9df297c78ee842d32de7a580c628074b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-rgw-rgw-compute-0-xkdunz, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:42 np0005537642 podman[89543]: 2025-11-27 10:57:42.108735314 +0000 UTC m=+0.365197599 container start c3ed71a75325bb4842de8c403ef62a9df297c78ee842d32de7a580c628074b98 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-rgw-rgw-compute-0-xkdunz, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 27 05:57:42 np0005537642 bash[89543]: c3ed71a75325bb4842de8c403ef62a9df297c78ee842d32de7a580c628074b98
Nov 27 05:57:42 np0005537642 radosgw[89563]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 27 05:57:42 np0005537642 radosgw[89563]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Nov 27 05:57:42 np0005537642 radosgw[89563]: framework: beast
Nov 27 05:57:42 np0005537642 radosgw[89563]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 27 05:57:42 np0005537642 radosgw[89563]: init_numa not setting numa affinity
Nov 27 05:57:42 np0005537642 systemd[1]: Started Ceph rgw.rgw.compute-0.xkdunz for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1331916404' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:57:42 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:57:42 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1331916404' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  1: '-n'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  2: 'mgr.compute-0.qnrkij'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  3: '-f'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  4: '--setuser'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  5: 'ceph'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  6: '--setgroup'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  7: 'ceph'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  8: '--default-log-to-file=false'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  9: '--default-log-to-journald=true'
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.qnrkij(active, since 3m), standbys: compute-2.yyrxaz, compute-1.npcryb
Nov 27 05:57:42 np0005537642 systemd[1]: libpod-0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 podman[89365]: 2025-11-27 10:57:42.560034744 +0000 UTC m=+2.380592810 container died 0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c (image=quay.io/ceph/ceph:v19, name=strange_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 27 05:57:42 np0005537642 systemd[1]: session-32.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-29.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-23.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-30.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-31.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-24.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-25.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-27.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-21.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-33.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd[1]: session-33.scope: Consumed 33.026s CPU time.
Nov 27 05:57:42 np0005537642 systemd[1]: session-28.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 33 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd[1]: session-26.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 31 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 32 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 29 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 30 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 21 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 23 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 25 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 24 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 26 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setuser ceph since I am not root
Nov 27 05:57:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setgroup ceph since I am not root
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 28 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Session 27 logged out. Waiting for processes to exit.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 32.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 29.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 23.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 30.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 31.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 24.
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 25.
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: pidfile_write: ignore empty --pid-file
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 27.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 21.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 33.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 28.
Nov 27 05:57:42 np0005537642 systemd-logind[801]: Removed session 26.
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'alerts'
Nov 27 05:57:42 np0005537642 systemd[1]: var-lib-containers-storage-overlay-7e8ea2e3f3dd5f78a7b008bb7153ee17a25cd356c0cc488a51d8c2d9b3614fdf-merged.mount: Deactivated successfully.
Nov 27 05:57:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:42.798+0000 7f8a172d4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'balancer'
Nov 27 05:57:42 np0005537642 podman[89365]: 2025-11-27 10:57:42.817207208 +0000 UTC m=+2.637765234 container remove 0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c (image=quay.io/ceph/ceph:v19, name=strange_germain, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 27 05:57:42 np0005537642 systemd[1]: libpod-conmon-0f216df46921c7b09fe6275632aaf81ba796700c12e5c68ac54389df9581940c.scope: Deactivated successfully.
Nov 27 05:57:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:42.896+0000 7f8a172d4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:57:42 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'cephadm'
Nov 27 05:57:43 np0005537642 python3[90215]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1331916404' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: from='mgr.14122 192.168.122.100:0/397881370' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:43 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 27 05:57:43 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 27 05:57:43 np0005537642 podman[90216]: 2025-11-27 10:57:43.26222415 +0000 UTC m=+0.030131015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:43 np0005537642 podman[90216]: 2025-11-27 10:57:43.3630573 +0000 UTC m=+0.130964095 container create 27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071 (image=quay.io/ceph/ceph:v19, name=modest_grothendieck, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:43 np0005537642 systemd[1]: Started libpod-conmon-27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071.scope.
Nov 27 05:57:43 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:43 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fc3c5aa1344702a75b4ce4e6d5ccded943f931e5f0c08acbf3f5747cd4033f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:43 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fc3c5aa1344702a75b4ce4e6d5ccded943f931e5f0c08acbf3f5747cd4033f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:43 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6fc3c5aa1344702a75b4ce4e6d5ccded943f931e5f0c08acbf3f5747cd4033f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:43 np0005537642 podman[90216]: 2025-11-27 10:57:43.523364647 +0000 UTC m=+0.291271472 container init 27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071 (image=quay.io/ceph/ceph:v19, name=modest_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:43 np0005537642 podman[90216]: 2025-11-27 10:57:43.532098345 +0000 UTC m=+0.300005130 container start 27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071 (image=quay.io/ceph/ceph:v19, name=modest_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:43 np0005537642 podman[90216]: 2025-11-27 10:57:43.611644271 +0000 UTC m=+0.379551176 container attach 27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071 (image=quay.io/ceph/ceph:v19, name=modest_grothendieck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 27 05:57:43 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'crash'
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-0.xkdunz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Nov 27 05:57:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:43.752+0000 7f8a172d4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:57:43 np0005537642 ceph-mgr[74636]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:57:43 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'dashboard'
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.102:0/3611071583' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2968932671' entity='client.rgw.rgw.compute-0.xkdunz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.101:0/4171051465' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-0.xkdunz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'devicehealth'
Nov 27 05:57:44 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.a scrub starts
Nov 27 05:57:44 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.a scrub ok
Nov 27 05:57:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:44.398+0000 7f8a172d4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'diskprediction_local'
Nov 27 05:57:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 27 05:57:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 27 05:57:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  from numpy import show_config as show_numpy_config
Nov 27 05:57:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:44.593+0000 7f8a172d4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'influx'
Nov 27 05:57:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:44.665+0000 7f8a172d4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'insights'
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-0.xkdunz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'iostat'
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-0.xkdunz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Nov 27 05:57:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:44.793+0000 7f8a172d4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:57:44 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'k8sevents'
Nov 27 05:57:45 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'localpool'
Nov 27 05:57:45 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mds_autoscaler'
Nov 27 05:57:45 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 27 05:57:45 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-0.xkdunz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.101:0/4171051465' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/2968932671' entity='client.rgw.rgw.compute-0.xkdunz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.102:0/3611071583' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-0.xkdunz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 27 05:57:45 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mirroring'
Nov 27 05:57:45 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'nfs'
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-0.xkdunz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 27 05:57:45 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 27 05:57:45 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:45.762+0000 7f8a172d4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:57:45 np0005537642 ceph-mgr[74636]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:57:45 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'orchestrator'
Nov 27 05:57:45 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:45.970+0000 7f8a172d4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:57:45 np0005537642 ceph-mgr[74636]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:57:45 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_perf_query'
Nov 27 05:57:46 np0005537642 radosgw[89563]: v1 topic migration: starting v1 topic migration..
Nov 27 05:57:46 np0005537642 radosgw[89563]: LDAP not started since no server URIs were provided in the configuration.
Nov 27 05:57:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-rgw-rgw-compute-0-xkdunz[89559]: 2025-11-27T10:57:46.029+0000 7f2a4ff8e980 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 27 05:57:46 np0005537642 radosgw[89563]: v1 topic migration: finished v1 topic migration
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:46.042+0000 7f8a172d4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_support'
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 radosgw[89563]: framework: beast
Nov 27 05:57:46 np0005537642 radosgw[89563]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 27 05:57:46 np0005537642 radosgw[89563]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 radosgw[89563]: starting handler: beast
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 radosgw[89563]: set uid:gid to 167:167 (ceph:ceph)
Nov 27 05:57:46 np0005537642 radosgw[89563]: mgrc service_daemon_register rgw.24149 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.xkdunz,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025,kernel_version=5.14.0-642.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864324,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=675152cd-7f14-4b99-b8fc-74e8884ed61a,zone_name=default,zonegroup_id=b3bd84b0-a1e5-48d4-ab74-2f2514937c72,zonegroup_name=default}
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:46.115+0000 7f8a172d4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'pg_autoscaler'
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 radosgw[89563]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Nov 27 05:57:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:46.197+0000 7f8a172d4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'progress'
Nov 27 05:57:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:46.264+0000 7f8a172d4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'prometheus'
Nov 27 05:57:46 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 27 05:57:46 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 27 05:57:46 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-1.mkskbt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 27 05:57:46 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-0.xkdunz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 27 05:57:46 np0005537642 ceph-mon[74338]: from='client.? ' entity='client.rgw.rgw.compute-2.ujaphm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 27 05:57:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:46.585+0000 7f8a172d4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rbd_support'
Nov 27 05:57:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:46.677+0000 7f8a172d4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'restful'
Nov 27 05:57:46 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rgw'
Nov 27 05:57:47 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:47.110+0000 7f8a172d4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rook'
Nov 27 05:57:47 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 27 05:57:47 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 27 05:57:47 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:47.712+0000 7f8a172d4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'selftest'
Nov 27 05:57:47 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:47.794+0000 7f8a172d4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'snap_schedule'
Nov 27 05:57:47 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:47.869+0000 7f8a172d4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'stats'
Nov 27 05:57:47 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'status'
Nov 27 05:57:48 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:48.038+0000 7f8a172d4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telegraf'
Nov 27 05:57:48 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:48.116+0000 7f8a172d4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telemetry'
Nov 27 05:57:48 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:48.268+0000 7f8a172d4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'test_orchestrator'
Nov 27 05:57:48 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 27 05:57:48 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 27 05:57:48 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:48.521+0000 7f8a172d4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'volumes'
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb restarted
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb started
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz restarted
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz started
Nov 27 05:57:48 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:48.778+0000 7f8a172d4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'zabbix'
Nov 27 05:57:48 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:48.849+0000 7f8a172d4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qnrkij restarted
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qnrkij
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: ms_deliver_dispatch: unhandled message 0x55826c2c3860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map Activating!
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map I am now activating
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.qnrkij(active, starting, since 0.033087s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.npcryb", "id": "compute-1.npcryb"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-1.npcryb", "id": "compute-1.npcryb"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.yyrxaz", "id": "compute-2.yyrxaz"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-2.yyrxaz", "id": "compute-2.yyrxaz"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e1 all = 1
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: balancer
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [balancer INFO root] Starting
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_10:57:48
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Manager daemon compute-0.qnrkij is now available
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: cephadm
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: crash
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: dashboard
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: devicehealth
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [dashboard INFO sso] Loading SSO DB version=1
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: iostat
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Starting
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: nfs
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: orchestrator
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: pg_autoscaler
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: progress
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [progress INFO root] Loading...
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] recovery thread starting
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] starting setup
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f89975e67f0>, <progress.module.GhostEvent object at 0x7f89975e6820>, <progress.module.GhostEvent object at 0x7f89975e6850>, <progress.module.GhostEvent object at 0x7f89975e6880>, <progress.module.GhostEvent object at 0x7f89975e68b0>, <progress.module.GhostEvent object at 0x7f89975e68e0>, <progress.module.GhostEvent object at 0x7f89975e6910>, <progress.module.GhostEvent object at 0x7f89975e6940>, <progress.module.GhostEvent object at 0x7f89975e6970>, <progress.module.GhostEvent object at 0x7f89975e69a0>, <progress.module.GhostEvent object at 0x7f89975e69d0>, <progress.module.GhostEvent object at 0x7f89975e6a00>, <progress.module.GhostEvent object at 0x7f89975e6a30>] historic events
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded OSDMap, ready.
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"} v 0)
Nov 27 05:57:48 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: rbd_support
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: restful
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [restful INFO root] server_addr: :: server_port: 8003
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [restful WARNING root] server not running: no certificate configured
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: status
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: telemetry
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] PerfHandler: starting
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 27 05:57:48 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TaskHandler: starting
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"} v 0)
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] setup complete
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: volumes
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Nov 27 05:57:49 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 27 05:57:49 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 27 05:57:49 np0005537642 systemd-logind[801]: New session 34 of user ceph-admin.
Nov 27 05:57:49 np0005537642 systemd[1]: Started Session 34 of User ceph-admin.
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.module] Engine started.
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: Active manager daemon compute-0.qnrkij restarted
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: Activating manager daemon compute-0.qnrkij
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: Manager daemon compute-0.qnrkij is now available
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.qnrkij(active, since 1.05811s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14343 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Nov 27 05:57:49 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:49 np0005537642 modest_grothendieck[90242]: Option GRAFANA_API_USERNAME updated
Nov 27 05:57:49 np0005537642 systemd[1]: libpod-27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071.scope: Deactivated successfully.
Nov 27 05:57:49 np0005537642 podman[90216]: 2025-11-27 10:57:49.969477794 +0000 UTC m=+6.737384629 container died 27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071 (image=quay.io/ceph/ceph:v19, name=modest_grothendieck, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 27 05:57:50 np0005537642 systemd[1]: var-lib-containers-storage-overlay-d6fc3c5aa1344702a75b4ce4e6d5ccded943f931e5f0c08acbf3f5747cd4033f-merged.mount: Deactivated successfully.
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:57:50] ENGINE Bus STARTING
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:57:50] ENGINE Bus STARTING
Nov 27 05:57:50 np0005537642 podman[90216]: 2025-11-27 10:57:50.026289795 +0000 UTC m=+6.794196590 container remove 27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071 (image=quay.io/ceph/ceph:v19, name=modest_grothendieck, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:50 np0005537642 systemd[1]: libpod-conmon-27641b88beb9420456c82b6c57610f0e12fbdd0472f01dafe35496150760c071.scope: Deactivated successfully.
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:57:50] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:57:50] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:57:50 np0005537642 podman[90574]: 2025-11-27 10:57:50.160725178 +0000 UTC m=+0.072939919 container exec 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:57:50] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:57:50] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:57:50] ENGINE Bus STARTED
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:57:50] ENGINE Bus STARTED
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:57:50] ENGINE Client ('192.168.122.100', 50218) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:57:50] ENGINE Client ('192.168.122.100', 50218) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:57:50 np0005537642 podman[90574]: 2025-11-27 10:57:50.273992131 +0000 UTC m=+0.186206852 container exec_died 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Nov 27 05:57:50 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 27 05:57:50 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 python3[90631]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 podman[90664]: 2025-11-27 10:57:50.487877677 +0000 UTC m=+0.045192482 container create fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c (image=quay.io/ceph/ceph:v19, name=condescending_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 systemd[1]: Started libpod-conmon-fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c.scope.
Nov 27 05:57:50 np0005537642 podman[90664]: 2025-11-27 10:57:50.464395421 +0000 UTC m=+0.021710246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:50 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829406a48b88cf180445bdf35e4e4263dc76976e85b6abd5ffd48829dcc1115f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829406a48b88cf180445bdf35e4e4263dc76976e85b6abd5ffd48829dcc1115f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:50 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829406a48b88cf180445bdf35e4e4263dc76976e85b6abd5ffd48829dcc1115f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:50 np0005537642 podman[90664]: 2025-11-27 10:57:50.583614433 +0000 UTC m=+0.140929268 container init fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c (image=quay.io/ceph/ceph:v19, name=condescending_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:57:50 np0005537642 podman[90664]: 2025-11-27 10:57:50.595064157 +0000 UTC m=+0.152378952 container start fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c (image=quay.io/ceph/ceph:v19, name=condescending_cray, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 27 05:57:50 np0005537642 podman[90664]: 2025-11-27 10:57:50.598498275 +0000 UTC m=+0.155813070 container attach fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c (image=quay.io/ceph/ceph:v19, name=condescending_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:50 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 27 05:57:50 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 27 05:57:51 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14367 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:51 np0005537642 condescending_cray[90693]: Option GRAFANA_API_PASSWORD updated
Nov 27 05:57:51 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 27 05:57:51 np0005537642 systemd[1]: libpod-fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c.scope: Deactivated successfully.
Nov 27 05:57:51 np0005537642 podman[90664]: 2025-11-27 10:57:51.048302923 +0000 UTC m=+0.605617718 container died fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c (image=quay.io/ceph/ceph:v19, name=condescending_cray, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:51 np0005537642 systemd[1]: var-lib-containers-storage-overlay-829406a48b88cf180445bdf35e4e4263dc76976e85b6abd5ffd48829dcc1115f-merged.mount: Deactivated successfully.
Nov 27 05:57:51 np0005537642 podman[90664]: 2025-11-27 10:57:51.097723554 +0000 UTC m=+0.655038359 container remove fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c (image=quay.io/ceph/ceph:v19, name=condescending_cray, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:51 np0005537642 systemd[1]: libpod-conmon-fd79dd114d8487c7a99e69eb73d599a93a72da189b7c1ddf2a5fd0456228782c.scope: Deactivated successfully.
Nov 27 05:57:51 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.7 deep-scrub starts
Nov 27 05:57:51 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.7 deep-scrub ok
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:51 np0005537642 python3[90867]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:51 np0005537642 podman[90906]: 2025-11-27 10:57:51.640405786 +0000 UTC m=+0.028374316 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:57:50] ENGINE Bus STARTING
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:57:50] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:57:50] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:57:50] ENGINE Bus STARTED
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:57:50] ENGINE Client ('192.168.122.100', 50218) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:57:51 np0005537642 podman[90906]: 2025-11-27 10:57:51.942839484 +0000 UTC m=+0.330808004 container create 7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97 (image=quay.io/ceph/ceph:v19, name=optimistic_ishizaka, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: Cluster is now healthy
Nov 27 05:57:51 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:52 np0005537642 systemd[1]: Started libpod-conmon-7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97.scope.
Nov 27 05:57:52 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807483f6264b95ed5e8ece942400b4d22b0215ff028efb1e3504aecb68482f5d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807483f6264b95ed5e8ece942400b4d22b0215ff028efb1e3504aecb68482f5d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/807483f6264b95ed5e8ece942400b4d22b0215ff028efb1e3504aecb68482f5d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.qnrkij(active, since 3s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:57:52 np0005537642 podman[90906]: 2025-11-27 10:57:52.123649472 +0000 UTC m=+0.511617982 container init 7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97 (image=quay.io/ceph/ceph:v19, name=optimistic_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:52 np0005537642 podman[90906]: 2025-11-27 10:57:52.135254661 +0000 UTC m=+0.523223141 container start 7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97 (image=quay.io/ceph/ceph:v19, name=optimistic_ishizaka, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:57:52 np0005537642 podman[90906]: 2025-11-27 10:57:52.188169372 +0000 UTC m=+0.576137902 container attach 7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97 (image=quay.io/ceph/ceph:v19, name=optimistic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 27 05:57:52 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.5 deep-scrub starts
Nov 27 05:57:52 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.5 deep-scrub ok
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:52 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14379 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:52 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 27 05:57:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:53 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Nov 27 05:57:53 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:53 np0005537642 optimistic_ishizaka[90958]: Option ALERTMANAGER_API_HOST updated
Nov 27 05:57:53 np0005537642 systemd[1]: libpod-7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97.scope: Deactivated successfully.
Nov 27 05:57:53 np0005537642 podman[90906]: 2025-11-27 10:57:53.469867244 +0000 UTC m=+1.857835774 container died 7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97 (image=quay.io/ceph/ceph:v19, name=optimistic_ishizaka, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:57:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:57:53 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:57:53 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:57:53 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:57:53 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:57:53 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:57:53 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:57:53 np0005537642 systemd[1]: var-lib-containers-storage-overlay-807483f6264b95ed5e8ece942400b4d22b0215ff028efb1e3504aecb68482f5d-merged.mount: Deactivated successfully.
Nov 27 05:57:53 np0005537642 podman[90906]: 2025-11-27 10:57:53.901153136 +0000 UTC m=+2.289121646 container remove 7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97 (image=quay.io/ceph/ceph:v19, name=optimistic_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 27 05:57:53 np0005537642 systemd[1]: libpod-conmon-7b3405d98385df9106527578fb5891f5a84ef5ccbed06a4295d5635d5d9c2b97.scope: Deactivated successfully.
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.qnrkij(active, since 5s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:57:54 np0005537642 python3[91171]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:54 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 27 05:57:54 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:54 np0005537642 podman[91221]: 2025-11-27 10:57:54.342562276 +0000 UTC m=+0.024317641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:54 np0005537642 podman[91221]: 2025-11-27 10:57:54.545684297 +0000 UTC m=+0.227439572 container create dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5 (image=quay.io/ceph/ceph:v19, name=awesome_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 27 05:57:54 np0005537642 systemd[1]: Started libpod-conmon-dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5.scope.
Nov 27 05:57:54 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df650868c5c5a3523aad4c7d735046fee4ff552df4f6921932bd7b00dcf4c887/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df650868c5c5a3523aad4c7d735046fee4ff552df4f6921932bd7b00dcf4c887/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df650868c5c5a3523aad4c7d735046fee4ff552df4f6921932bd7b00dcf4c887/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:57:54 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:57:54 np0005537642 podman[91221]: 2025-11-27 10:57:54.85676984 +0000 UTC m=+0.538525195 container init dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5 (image=quay.io/ceph/ceph:v19, name=awesome_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:54 np0005537642 podman[91221]: 2025-11-27 10:57:54.863319416 +0000 UTC m=+0.545074721 container start dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5 (image=quay.io/ceph/ceph:v19, name=awesome_agnesi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 27 05:57:54 np0005537642 podman[91221]: 2025-11-27 10:57:54.880857224 +0000 UTC m=+0.562612499 container attach dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5 (image=quay.io/ceph/ceph:v19, name=awesome_agnesi, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:54 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Nov 27 05:57:55 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 27 05:57:55 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:55 np0005537642 awesome_agnesi[91361]: Option PROMETHEUS_API_HOST updated
Nov 27 05:57:55 np0005537642 systemd[1]: libpod-dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5.scope: Deactivated successfully.
Nov 27 05:57:55 np0005537642 podman[91221]: 2025-11-27 10:57:55.588048612 +0000 UTC m=+1.269803897 container died dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5 (image=quay.io/ceph/ceph:v19, name=awesome_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:57:55 np0005537642 systemd[1]: var-lib-containers-storage-overlay-df650868c5c5a3523aad4c7d735046fee4ff552df4f6921932bd7b00dcf4c887-merged.mount: Deactivated successfully.
Nov 27 05:57:55 np0005537642 podman[91221]: 2025-11-27 10:57:55.713729656 +0000 UTC m=+1.395484951 container remove dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5 (image=quay.io/ceph/ceph:v19, name=awesome_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:57:55 np0005537642 systemd[1]: libpod-conmon-dbad40bf8ea335f7728552bc6b97006b482c7e9ff88a2137fc211eedee9054d5.scope: Deactivated successfully.
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:55 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:56 np0005537642 python3[91806]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:57:56 np0005537642 podman[91863]: 2025-11-27 10:57:56.122092828 +0000 UTC m=+0.052124570 container create 9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1 (image=quay.io/ceph/ceph:v19, name=focused_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Nov 27 05:57:56 np0005537642 systemd[1]: Started libpod-conmon-9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1.scope.
Nov 27 05:57:56 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:56 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6055040fbebbcf6a11c07349546852a0d9770f5c6ee3c61390a86537dd9fcd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:56 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6055040fbebbcf6a11c07349546852a0d9770f5c6ee3c61390a86537dd9fcd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:56 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1b6055040fbebbcf6a11c07349546852a0d9770f5c6ee3c61390a86537dd9fcd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:56 np0005537642 podman[91863]: 2025-11-27 10:57:56.095310708 +0000 UTC m=+0.025342500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:56 np0005537642 podman[91863]: 2025-11-27 10:57:56.199245736 +0000 UTC m=+0.129277468 container init 9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1 (image=quay.io/ceph/ceph:v19, name=focused_kowalevski, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 27 05:57:56 np0005537642 podman[91863]: 2025-11-27 10:57:56.204824914 +0000 UTC m=+0.134856626 container start 9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1 (image=quay.io/ceph/ceph:v19, name=focused_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:56 np0005537642 podman[91863]: 2025-11-27 10:57:56.211105372 +0000 UTC m=+0.141137064 container attach 9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1 (image=quay.io/ceph/ceph:v19, name=focused_kowalevski, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:56 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 27 05:57:56 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:57:56 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14391 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:57:56 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v7: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s
Nov 27 05:57:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:56 np0005537642 focused_kowalevski[91913]: Option GRAFANA_API_URL updated
Nov 27 05:57:56 np0005537642 systemd[1]: libpod-9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1.scope: Deactivated successfully.
Nov 27 05:57:56 np0005537642 podman[91863]: 2025-11-27 10:57:56.968102923 +0000 UTC m=+0.898134665 container died 9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1 (image=quay.io/ceph/ceph:v19, name=focused_kowalevski, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:57 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 27 05:57:57 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:57:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:57 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 148a4e2e-7c03-4e1a-97c9-1616cad5bf4a (Updating node-exporter deployment (+3 -> 3))
Nov 27 05:57:57 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Nov 27 05:57:57 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Nov 27 05:57:57 np0005537642 systemd[1]: var-lib-containers-storage-overlay-1b6055040fbebbcf6a11c07349546852a0d9770f5c6ee3c61390a86537dd9fcd-merged.mount: Deactivated successfully.
Nov 27 05:57:57 np0005537642 podman[91863]: 2025-11-27 10:57:57.57265691 +0000 UTC m=+1.502688622 container remove 9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1 (image=quay.io/ceph/ceph:v19, name=focused_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:57 np0005537642 systemd[1]: libpod-conmon-9bf7aa6aa6bac29d85928e8d36b2b593b3920b69e5c5793b3a470063b33b93f1.scope: Deactivated successfully.
Nov 27 05:57:58 np0005537642 python3[92126]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:58 np0005537642 podman[92144]: 2025-11-27 10:57:58.060521137 +0000 UTC m=+0.026958266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:57:58 np0005537642 podman[92144]: 2025-11-27 10:57:58.229653244 +0000 UTC m=+0.196090383 container create 2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e (image=quay.io/ceph/ceph:v19, name=friendly_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 27 05:57:58 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:58 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:58 np0005537642 ceph-mon[74338]: from='mgr.14337 192.168.122.100:0/1533299113' entity='mgr.compute-0.qnrkij' 
Nov 27 05:57:58 np0005537642 ceph-mon[74338]: Deploying daemon node-exporter.compute-0 on compute-0
Nov 27 05:57:58 np0005537642 systemd[1]: Started libpod-conmon-2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e.scope.
Nov 27 05:57:58 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:57:58 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ebf0de9fa545d4b9d71b04659ffbee2c09ec73b6e5c336cd66f458f95977b5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:58 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ebf0de9fa545d4b9d71b04659ffbee2c09ec73b6e5c336cd66f458f95977b5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:58 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87ebf0de9fa545d4b9d71b04659ffbee2c09ec73b6e5c336cd66f458f95977b5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:57:58 np0005537642 podman[92144]: 2025-11-27 10:57:58.427758873 +0000 UTC m=+0.394196052 container init 2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e (image=quay.io/ceph/ceph:v19, name=friendly_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:57:58 np0005537642 podman[92144]: 2025-11-27 10:57:58.437792047 +0000 UTC m=+0.404229196 container start 2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e (image=quay.io/ceph/ceph:v19, name=friendly_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 27 05:57:58 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 27 05:57:58 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 27 05:57:58 np0005537642 podman[92144]: 2025-11-27 10:57:58.502651217 +0000 UTC m=+0.469088366 container attach 2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e (image=quay.io/ceph/ceph:v19, name=friendly_agnesi, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:57:58 np0005537642 systemd[1]: Reloading.
Nov 27 05:57:58 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:57:58 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:57:58 np0005537642 systemd[1]: Reloading.
Nov 27 05:57:58 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v8: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Nov 27 05:57:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Nov 27 05:57:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26158713' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 27 05:57:58 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:57:58 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:57:59 np0005537642 systemd[1]: Starting Ceph node-exporter.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:57:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/26158713' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  1: '-n'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  2: 'mgr.compute-0.qnrkij'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  3: '-f'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  4: '--setuser'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  5: 'ceph'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  6: '--setgroup'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  7: 'ceph'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  8: '--default-log-to-file=false'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  9: '--default-log-to-journald=true'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr respawn  exe_path /proc/self/exe
Nov 27 05:57:59 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.qnrkij(active, since 10s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:57:59 np0005537642 systemd[1]: libpod-2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e.scope: Deactivated successfully.
Nov 27 05:57:59 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/26158713' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Nov 27 05:57:59 np0005537642 podman[92334]: 2025-11-27 10:57:59.395525451 +0000 UTC m=+0.029057065 container died 2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e (image=quay.io/ceph/ceph:v19, name=friendly_agnesi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 27 05:57:59 np0005537642 bash[92340]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Nov 27 05:57:59 np0005537642 systemd[1]: var-lib-containers-storage-overlay-87ebf0de9fa545d4b9d71b04659ffbee2c09ec73b6e5c336cd66f458f95977b5-merged.mount: Deactivated successfully.
Nov 27 05:57:59 np0005537642 systemd-logind[801]: Session 34 logged out. Waiting for processes to exit.
Nov 27 05:57:59 np0005537642 podman[92334]: 2025-11-27 10:57:59.451446667 +0000 UTC m=+0.084978271 container remove 2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e (image=quay.io/ceph/ceph:v19, name=friendly_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 27 05:57:59 np0005537642 systemd[1]: libpod-conmon-2229c462985a6b8425fb0b4f05cc3ba6b5731714169c9a02e956627b058aef3e.scope: Deactivated successfully.
Nov 27 05:57:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setuser ceph since I am not root
Nov 27 05:57:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setgroup ceph since I am not root
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: pidfile_write: ignore empty --pid-file
Nov 27 05:57:59 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 27 05:57:59 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'alerts'
Nov 27 05:57:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:59.599+0000 7f1633d52140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'balancer'
Nov 27 05:57:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:57:59.703+0000 7f1633d52140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:57:59 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'cephadm'
Nov 27 05:57:59 np0005537642 python3[92406]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:57:59 np0005537642 podman[92407]: 2025-11-27 10:57:59.942062342 +0000 UTC m=+0.099040030 container create f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947 (image=quay.io/ceph/ceph:v19, name=ecstatic_williams, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:57:59 np0005537642 bash[92340]: Getting image source signatures
Nov 27 05:57:59 np0005537642 bash[92340]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Nov 27 05:57:59 np0005537642 bash[92340]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Nov 27 05:57:59 np0005537642 bash[92340]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Nov 27 05:57:59 np0005537642 podman[92407]: 2025-11-27 10:57:59.874387962 +0000 UTC m=+0.031365690 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:00 np0005537642 systemd[1]: Started libpod-conmon-f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947.scope.
Nov 27 05:58:00 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:00 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d769ef6adb9e738eb6805bc15e1f84cc4f4a88eaeebb92ef1a3ffe7f467031c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:00 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d769ef6adb9e738eb6805bc15e1f84cc4f4a88eaeebb92ef1a3ffe7f467031c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:00 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d769ef6adb9e738eb6805bc15e1f84cc4f4a88eaeebb92ef1a3ffe7f467031c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:00 np0005537642 podman[92407]: 2025-11-27 10:58:00.168124414 +0000 UTC m=+0.325102062 container init f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947 (image=quay.io/ceph/ceph:v19, name=ecstatic_williams, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:00 np0005537642 podman[92407]: 2025-11-27 10:58:00.176182512 +0000 UTC m=+0.333160160 container start f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947 (image=quay.io/ceph/ceph:v19, name=ecstatic_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 27 05:58:00 np0005537642 podman[92407]: 2025-11-27 10:58:00.215990921 +0000 UTC m=+0.372968569 container attach f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947 (image=quay.io/ceph/ceph:v19, name=ecstatic_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 27 05:58:00 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/26158713' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Nov 27 05:58:00 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 27 05:58:00 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 27 05:58:00 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'crash'
Nov 27 05:58:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:00.568+0000 7f1633d52140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:58:00 np0005537642 ceph-mgr[74636]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:58:00 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'dashboard'
Nov 27 05:58:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Nov 27 05:58:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1536312525' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'devicehealth'
Nov 27 05:58:01 np0005537642 bash[92340]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Nov 27 05:58:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:01.218+0000 7f1633d52140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'diskprediction_local'
Nov 27 05:58:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 27 05:58:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 27 05:58:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  from numpy import show_config as show_numpy_config
Nov 27 05:58:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:01.399+0000 7f1633d52140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'influx'
Nov 27 05:58:01 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Nov 27 05:58:01 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Nov 27 05:58:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:01.477+0000 7f1633d52140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'insights'
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'iostat'
Nov 27 05:58:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:01.617+0000 7f1633d52140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:58:01 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'k8sevents'
Nov 27 05:58:01 np0005537642 bash[92340]: Writing manifest to image destination
Nov 27 05:58:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1536312525' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 27 05:58:01 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.qnrkij(active, since 13s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:58:01 np0005537642 systemd[1]: libpod-f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947.scope: Deactivated successfully.
Nov 27 05:58:01 np0005537642 podman[92407]: 2025-11-27 10:58:01.944229339 +0000 UTC m=+2.101206987 container died f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947 (image=quay.io/ceph/ceph:v19, name=ecstatic_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:58:01 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1536312525' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Nov 27 05:58:02 np0005537642 systemd[1]: var-lib-containers-storage-overlay-6d769ef6adb9e738eb6805bc15e1f84cc4f4a88eaeebb92ef1a3ffe7f467031c-merged.mount: Deactivated successfully.
Nov 27 05:58:02 np0005537642 podman[92407]: 2025-11-27 10:58:02.060246669 +0000 UTC m=+2.217224317 container remove f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947 (image=quay.io/ceph/ceph:v19, name=ecstatic_williams, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:02 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'localpool'
Nov 27 05:58:02 np0005537642 podman[92340]: 2025-11-27 10:58:02.089262452 +0000 UTC m=+2.707144402 container create ef4ca26692ee3ac96369d611d0545c11cb87735cb1047144085ac48b0518225b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:58:02 np0005537642 systemd[1]: libpod-conmon-f4eed987a48ed8d9d4672a4063e41bf209f9eae5647dd4d4b88147ba16b63947.scope: Deactivated successfully.
Nov 27 05:58:02 np0005537642 podman[92340]: 2025-11-27 10:58:02.037565166 +0000 UTC m=+2.655447166 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Nov 27 05:58:02 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ffecffd7b3add11fc5037dcae7467871d1a60701868bdf2a0f712c8a50f674/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:02 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mds_autoscaler'
Nov 27 05:58:02 np0005537642 podman[92340]: 2025-11-27 10:58:02.167076739 +0000 UTC m=+2.784958689 container init ef4ca26692ee3ac96369d611d0545c11cb87735cb1047144085ac48b0518225b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:58:02 np0005537642 podman[92340]: 2025-11-27 10:58:02.173747819 +0000 UTC m=+2.791629749 container start ef4ca26692ee3ac96369d611d0545c11cb87735cb1047144085ac48b0518225b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.180Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.181Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.181Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.181Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.181Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.181Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=arp
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=bcache
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=bonding
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=cpu
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=dmi
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=edac
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=entropy
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=filefd
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=hwmon
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=netclass
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=netdev
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=netstat
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=nfs
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=nvme
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=os
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=pressure
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=rapl
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=selinux
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=softnet
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=stat
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=textfile
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=thermal_zone
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=time
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=uname
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=xfs
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.183Z caller=node_exporter.go:117 level=info collector=zfs
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.184Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0[92532]: ts=2025-11-27T10:58:02.184Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Nov 27 05:58:02 np0005537642 bash[92340]: ef4ca26692ee3ac96369d611d0545c11cb87735cb1047144085ac48b0518225b
Nov 27 05:58:02 np0005537642 systemd[1]: Started Ceph node-exporter.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:58:02 np0005537642 systemd[1]: session-34.scope: Deactivated successfully.
Nov 27 05:58:02 np0005537642 systemd[1]: session-34.scope: Consumed 6.028s CPU time.
Nov 27 05:58:02 np0005537642 systemd-logind[801]: Removed session 34.
Nov 27 05:58:02 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mirroring'
Nov 27 05:58:02 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 27 05:58:02 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 27 05:58:02 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'nfs'
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:02.735+0000 7f1633d52140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:58:02 np0005537642 ceph-mgr[74636]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:58:02 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'orchestrator'
Nov 27 05:58:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:02.948+0000 7f1633d52140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:58:02 np0005537642 ceph-mgr[74636]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:58:02 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_perf_query'
Nov 27 05:58:03 np0005537642 python3[92616]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:58:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:03.029+0000 7f1633d52140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_support'
Nov 27 05:58:03 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1536312525' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Nov 27 05:58:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:03.098+0000 7f1633d52140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'pg_autoscaler'
Nov 27 05:58:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:03.174+0000 7f1633d52140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'progress'
Nov 27 05:58:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:03.246+0000 7f1633d52140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'prometheus'
Nov 27 05:58:03 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 27 05:58:03 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 27 05:58:03 np0005537642 python3[92687]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764241082.6503725-37254-274612030401559/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:58:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:03.596+0000 7f1633d52140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rbd_support'
Nov 27 05:58:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:03.687+0000 7f1633d52140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'restful'
Nov 27 05:58:03 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rgw'
Nov 27 05:58:03 np0005537642 python3[92737]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:04 np0005537642 podman[92738]: 2025-11-27 10:58:04.081117556 +0000 UTC m=+0.067434394 container create 92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 27 05:58:04 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:04.099+0000 7f1633d52140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rook'
Nov 27 05:58:04 np0005537642 systemd[1]: Started libpod-conmon-92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82.scope.
Nov 27 05:58:04 np0005537642 podman[92738]: 2025-11-27 10:58:04.057334601 +0000 UTC m=+0.043651459 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:04 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05afa9de1e35f2f21932992b25af4b5f33018e03e53dd5d0f99664487cb9eb26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05afa9de1e35f2f21932992b25af4b5f33018e03e53dd5d0f99664487cb9eb26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:04 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05afa9de1e35f2f21932992b25af4b5f33018e03e53dd5d0f99664487cb9eb26/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:04 np0005537642 podman[92738]: 2025-11-27 10:58:04.189704386 +0000 UTC m=+0.176021244 container init 92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:04 np0005537642 podman[92738]: 2025-11-27 10:58:04.198091514 +0000 UTC m=+0.184408372 container start 92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 05:58:04 np0005537642 podman[92738]: 2025-11-27 10:58:04.202325814 +0000 UTC m=+0.188642732 container attach 92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:04 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 27 05:58:04 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 27 05:58:04 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:04.639+0000 7f1633d52140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'selftest'
Nov 27 05:58:04 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:04.719+0000 7f1633d52140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'snap_schedule'
Nov 27 05:58:04 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:04.799+0000 7f1633d52140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'stats'
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'status'
Nov 27 05:58:04 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:04.954+0000 7f1633d52140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:58:04 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telegraf'
Nov 27 05:58:05 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:05.025+0000 7f1633d52140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telemetry'
Nov 27 05:58:05 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:05.192+0000 7f1633d52140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'test_orchestrator'
Nov 27 05:58:05 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Nov 27 05:58:05 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Nov 27 05:58:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb restarted
Nov 27 05:58:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb started
Nov 27 05:58:05 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:05.427+0000 7f1633d52140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'volumes'
Nov 27 05:58:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz restarted
Nov 27 05:58:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz started
Nov 27 05:58:05 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:05.693+0000 7f1633d52140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'zabbix'
Nov 27 05:58:05 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:05.762+0000 7f1633d52140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:58:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qnrkij restarted
Nov 27 05:58:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 27 05:58:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qnrkij
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: ms_deliver_dispatch: unhandled message 0x5633d57cd860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  1: '-n'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  2: 'mgr.compute-0.qnrkij'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  3: '-f'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  4: '--setuser'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  5: 'ceph'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  6: '--setgroup'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  7: 'ceph'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  8: '--default-log-to-file=false'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  9: '--default-log-to-journald=true'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr respawn  exe_path /proc/self/exe
Nov 27 05:58:05 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setuser ceph since I am not root
Nov 27 05:58:05 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setgroup ceph since I am not root
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: pidfile_write: ignore empty --pid-file
Nov 27 05:58:05 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'alerts'
Nov 27 05:58:06 np0005537642 ceph-mgr[74636]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:58:06 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'balancer'
Nov 27 05:58:06 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:06.016+0000 7fea93e2d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 05:58:06 np0005537642 ceph-mgr[74636]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:58:06 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'cephadm'
Nov 27 05:58:06 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:06.091+0000 7fea93e2d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 05:58:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 27 05:58:06 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 27 05:58:06 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.qnrkij(active, starting, since 0.413936s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:58:06 np0005537642 ceph-mon[74338]: Active manager daemon compute-0.qnrkij restarted
Nov 27 05:58:06 np0005537642 ceph-mon[74338]: Activating manager daemon compute-0.qnrkij
Nov 27 05:58:06 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 27 05:58:06 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 27 05:58:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:06 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'crash'
Nov 27 05:58:06 np0005537642 ceph-mgr[74636]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:58:06 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'dashboard'
Nov 27 05:58:06 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:06.924+0000 7fea93e2d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 27 05:58:07 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'devicehealth'
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'diskprediction_local'
Nov 27 05:58:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:07.544+0000 7fea93e2d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 27 05:58:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 27 05:58:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  from numpy import show_config as show_numpy_config
Nov 27 05:58:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:07.707+0000 7fea93e2d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'influx'
Nov 27 05:58:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:07.774+0000 7fea93e2d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'insights'
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'iostat'
Nov 27 05:58:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:07.908+0000 7fea93e2d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 05:58:07 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'k8sevents'
Nov 27 05:58:08 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'localpool'
Nov 27 05:58:08 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mds_autoscaler'
Nov 27 05:58:08 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 27 05:58:08 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 27 05:58:08 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mirroring'
Nov 27 05:58:08 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'nfs'
Nov 27 05:58:08 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:08.908+0000 7fea93e2d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:58:08 np0005537642 ceph-mgr[74636]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 05:58:08 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'orchestrator'
Nov 27 05:58:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:09.108+0000 7fea93e2d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_perf_query'
Nov 27 05:58:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:09.181+0000 7fea93e2d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_support'
Nov 27 05:58:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:09.244+0000 7fea93e2d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'pg_autoscaler'
Nov 27 05:58:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:09.319+0000 7fea93e2d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'progress'
Nov 27 05:58:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:09.385+0000 7fea93e2d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'prometheus'
Nov 27 05:58:09 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 27 05:58:09 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 27 05:58:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:09.720+0000 7fea93e2d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rbd_support'
Nov 27 05:58:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:09.815+0000 7fea93e2d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 05:58:09 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'restful'
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rgw'
Nov 27 05:58:10 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:10.233+0000 7fea93e2d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rook'
Nov 27 05:58:10 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.19 scrub starts
Nov 27 05:58:10 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.19 scrub ok
Nov 27 05:58:10 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:10.796+0000 7fea93e2d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'selftest'
Nov 27 05:58:10 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:10.865+0000 7fea93e2d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'snap_schedule'
Nov 27 05:58:10 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:10.942+0000 7fea93e2d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 05:58:10 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'stats'
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'status'
Nov 27 05:58:11 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:11.088+0000 7fea93e2d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telegraf'
Nov 27 05:58:11 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:11.155+0000 7fea93e2d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telemetry'
Nov 27 05:58:11 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:11.305+0000 7fea93e2d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'test_orchestrator'
Nov 27 05:58:11 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.c scrub starts
Nov 27 05:58:11 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.c scrub ok
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb restarted
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb started
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.qnrkij(active, starting, since 5s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:58:11 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:11.518+0000 7fea93e2d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'volumes'
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz restarted
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz started
Nov 27 05:58:11 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:11.776+0000 7fea93e2d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'zabbix'
Nov 27 05:58:11 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T10:58:11.855+0000 7fea93e2d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: ms_deliver_dispatch: unhandled message 0x555b054ef860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qnrkij restarted
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qnrkij
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map Activating!
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map I am now activating
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.qnrkij(active, starting, since 0.0403693s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.npcryb", "id": "compute-1.npcryb"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-1.npcryb", "id": "compute-1.npcryb"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.yyrxaz", "id": "compute-2.yyrxaz"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-2.yyrxaz", "id": "compute-2.yyrxaz"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e1 all = 1
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: balancer
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Starting
Nov 27 05:58:11 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Manager daemon compute-0.qnrkij is now available
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_10:58:11
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: cephadm
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: crash
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: dashboard
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: devicehealth
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: iostat
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Starting
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: nfs
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [dashboard INFO sso] Loading SSO DB version=1
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: orchestrator
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: pg_autoscaler
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: progress
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:11 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [progress INFO root] Loading...
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fea11995490>, <progress.module.GhostEvent object at 0x7fea119954c0>, <progress.module.GhostEvent object at 0x7fea119954f0>, <progress.module.GhostEvent object at 0x7fea11995520>, <progress.module.GhostEvent object at 0x7fea11995550>, <progress.module.GhostEvent object at 0x7fea11995580>, <progress.module.GhostEvent object at 0x7fea119955b0>, <progress.module.GhostEvent object at 0x7fea119955e0>, <progress.module.GhostEvent object at 0x7fea11995610>, <progress.module.GhostEvent object at 0x7fea11995640>, <progress.module.GhostEvent object at 0x7fea11995670>, <progress.module.GhostEvent object at 0x7fea119956a0>, <progress.module.GhostEvent object at 0x7fea119956d0>] historic events
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] recovery thread starting
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] starting setup
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: rbd_support
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded OSDMap, ready.
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: restful
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"} v 0)
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [restful INFO root] server_addr: :: server_port: 8003
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: status
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [restful WARNING root] server not running: no certificate configured
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: telemetry
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] PerfHandler: starting
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TaskHandler: starting
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"} v 0)
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: volumes
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] setup complete
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Nov 27 05:58:12 np0005537642 systemd[1]: Stopping User Manager for UID 42477...
Nov 27 05:58:12 np0005537642 systemd[75677]: Activating special unit Exit the Session...
Nov 27 05:58:12 np0005537642 systemd[75677]: Stopped target Main User Target.
Nov 27 05:58:12 np0005537642 systemd[75677]: Stopped target Basic System.
Nov 27 05:58:12 np0005537642 systemd[75677]: Stopped target Paths.
Nov 27 05:58:12 np0005537642 systemd[75677]: Stopped target Sockets.
Nov 27 05:58:12 np0005537642 systemd[75677]: Stopped target Timers.
Nov 27 05:58:12 np0005537642 systemd[75677]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 27 05:58:12 np0005537642 systemd[75677]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 27 05:58:12 np0005537642 systemd[75677]: Closed D-Bus User Message Bus Socket.
Nov 27 05:58:12 np0005537642 systemd[75677]: Stopped Create User's Volatile Files and Directories.
Nov 27 05:58:12 np0005537642 systemd[75677]: Removed slice User Application Slice.
Nov 27 05:58:12 np0005537642 systemd[75677]: Reached target Shutdown.
Nov 27 05:58:12 np0005537642 systemd[75677]: Finished Exit the Session.
Nov 27 05:58:12 np0005537642 systemd[75677]: Reached target Exit the Session.
Nov 27 05:58:12 np0005537642 systemd[1]: user@42477.service: Deactivated successfully.
Nov 27 05:58:12 np0005537642 systemd[1]: Stopped User Manager for UID 42477.
Nov 27 05:58:12 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.1a deep-scrub starts
Nov 27 05:58:12 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.1a deep-scrub ok
Nov 27 05:58:12 np0005537642 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 27 05:58:12 np0005537642 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 27 05:58:12 np0005537642 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 27 05:58:12 np0005537642 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 27 05:58:12 np0005537642 systemd[1]: Removed slice User Slice of UID 42477.
Nov 27 05:58:12 np0005537642 systemd[1]: user-42477.slice: Consumed 41.090s CPU time.
Nov 27 05:58:12 np0005537642 systemd-logind[801]: New session 35 of user ceph-admin.
Nov 27 05:58:12 np0005537642 systemd[1]: Created slice User Slice of UID 42477.
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.module] Engine started.
Nov 27 05:58:12 np0005537642 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 27 05:58:12 np0005537642 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 27 05:58:12 np0005537642 systemd[1]: Starting User Manager for UID 42477...
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: Active manager daemon compute-0.qnrkij restarted
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: Activating manager daemon compute-0.qnrkij
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: Manager daemon compute-0.qnrkij is now available
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 05:58:12 np0005537642 systemd[92942]: Queued start job for default target Main User Target.
Nov 27 05:58:12 np0005537642 systemd[92942]: Created slice User Application Slice.
Nov 27 05:58:12 np0005537642 systemd[92942]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 27 05:58:12 np0005537642 systemd[92942]: Started Daily Cleanup of User's Temporary Directories.
Nov 27 05:58:12 np0005537642 systemd[92942]: Reached target Paths.
Nov 27 05:58:12 np0005537642 systemd[92942]: Reached target Timers.
Nov 27 05:58:12 np0005537642 systemd[92942]: Starting D-Bus User Message Bus Socket...
Nov 27 05:58:12 np0005537642 systemd[92942]: Starting Create User's Volatile Files and Directories...
Nov 27 05:58:12 np0005537642 systemd[92942]: Listening on D-Bus User Message Bus Socket.
Nov 27 05:58:12 np0005537642 systemd[92942]: Reached target Sockets.
Nov 27 05:58:12 np0005537642 systemd[92942]: Finished Create User's Volatile Files and Directories.
Nov 27 05:58:12 np0005537642 systemd[92942]: Reached target Basic System.
Nov 27 05:58:12 np0005537642 systemd[92942]: Reached target Main User Target.
Nov 27 05:58:12 np0005537642 systemd[92942]: Startup finished in 148ms.
Nov 27 05:58:12 np0005537642 systemd[1]: Started User Manager for UID 42477.
Nov 27 05:58:12 np0005537642 systemd[1]: Started Session 35 of User ceph-admin.
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.qnrkij(active, since 1.08834s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14418 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:12 np0005537642 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 27 05:58:12 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0[74334]: 2025-11-27T10:58:12.975+0000 7f2c33af7640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e2 new map
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e2 print_map#012e2#012btime 2025-11-27T10:58:12:975683+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:12.975652+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:12 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 27 05:58:13 np0005537642 systemd[1]: libpod-92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82.scope: Deactivated successfully.
Nov 27 05:58:13 np0005537642 conmon[92754]: conmon 92506e83a31c2706cfdf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82.scope/container/memory.events
Nov 27 05:58:13 np0005537642 podman[92738]: 2025-11-27 10:58:13.032401027 +0000 UTC m=+9.018717905 container died 92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Nov 27 05:58:13 np0005537642 systemd[1]: var-lib-containers-storage-overlay-05afa9de1e35f2f21932992b25af4b5f33018e03e53dd5d0f99664487cb9eb26-merged.mount: Deactivated successfully.
Nov 27 05:58:13 np0005537642 podman[92738]: 2025-11-27 10:58:13.106359945 +0000 UTC m=+9.092676813 container remove 92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82 (image=quay.io/ceph/ceph:v19, name=stupefied_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Nov 27 05:58:13 np0005537642 systemd[1]: libpod-conmon-92506e83a31c2706cfdf90027bbf0aa5d36f4e4946676004b9c9ae7f63c68d82.scope: Deactivated successfully.
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:58:13] ENGINE Bus STARTING
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:58:13] ENGINE Bus STARTING
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:58:13] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:58:13] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:58:13] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:58:13] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:58:13] ENGINE Bus STARTED
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:58:13] ENGINE Bus STARTED
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:10:58:13] ENGINE Client ('192.168.122.100', 50436) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:10:58:13] ENGINE Client ('192.168.122.100', 50436) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:58:13 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.18 deep-scrub starts
Nov 27 05:58:13 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.18 deep-scrub ok
Nov 27 05:58:13 np0005537642 podman[93144]: 2025-11-27 10:58:13.472894391 +0000 UTC m=+0.091892668 container exec 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 27 05:58:13 np0005537642 python3[93146]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:58:13 np0005537642 podman[93144]: 2025-11-27 10:58:13.597091623 +0000 UTC m=+0.216089840 container exec_died 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:58:13 np0005537642 podman[93165]: 2025-11-27 10:58:13.616428562 +0000 UTC m=+0.073235628 container create 40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334 (image=quay.io/ceph/ceph:v19, name=exciting_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 systemd[1]: Started libpod-conmon-40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334.scope.
Nov 27 05:58:13 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:13 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c63456e4a7aa11d8f8c1e6e575a0890547ff7e800efe49932714f4961693c92/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:13 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c63456e4a7aa11d8f8c1e6e575a0890547ff7e800efe49932714f4961693c92/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:13 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c63456e4a7aa11d8f8c1e6e575a0890547ff7e800efe49932714f4961693c92/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:13 np0005537642 podman[93165]: 2025-11-27 10:58:13.590336352 +0000 UTC m=+0.047143508 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:13 np0005537642 podman[93165]: 2025-11-27 10:58:13.691363767 +0000 UTC m=+0.148170943 container init 40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334 (image=quay.io/ceph/ceph:v19, name=exciting_lalande, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:58:13 np0005537642 podman[93165]: 2025-11-27 10:58:13.703538062 +0000 UTC m=+0.160345138 container start 40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334 (image=quay.io/ceph/ceph:v19, name=exciting_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Nov 27 05:58:13 np0005537642 podman[93165]: 2025-11-27 10:58:13.707435323 +0000 UTC m=+0.164242439 container attach 40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334 (image=quay.io/ceph/ceph:v19, name=exciting_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:14 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 27 05:58:14 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:58:14 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:14 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:14 np0005537642 exciting_lalande[93191]: Scheduled mds.cephfs update...
Nov 27 05:58:14 np0005537642 systemd[1]: libpod-40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334.scope: Deactivated successfully.
Nov 27 05:58:14 np0005537642 podman[93165]: 2025-11-27 10:58:14.113848889 +0000 UTC m=+0.570655995 container died 40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334 (image=quay.io/ceph/ceph:v19, name=exciting_lalande, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:14 np0005537642 systemd[1]: var-lib-containers-storage-overlay-1c63456e4a7aa11d8f8c1e6e575a0890547ff7e800efe49932714f4961693c92-merged.mount: Deactivated successfully.
Nov 27 05:58:14 np0005537642 podman[93165]: 2025-11-27 10:58:14.178298967 +0000 UTC m=+0.635106063 container remove 40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334 (image=quay.io/ceph/ceph:v19, name=exciting_lalande, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:14 np0005537642 systemd[1]: libpod-conmon-40271bf69fe77d87271c18db6550ea06bd3289c264c47f996076d80bf25b8334.scope: Deactivated successfully.
Nov 27 05:58:14 np0005537642 podman[93342]: 2025-11-27 10:58:14.233506333 +0000 UTC m=+0.066149037 container exec ef4ca26692ee3ac96369d611d0545c11cb87735cb1047144085ac48b0518225b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:58:14 np0005537642 podman[93342]: 2025-11-27 10:58:14.267887168 +0000 UTC m=+0.100529872 container exec_died ef4ca26692ee3ac96369d611d0545c11cb87735cb1047144085ac48b0518225b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:14 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 27 05:58:14 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 27 05:58:14 np0005537642 python3[93426]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:14 np0005537642 podman[93454]: 2025-11-27 10:58:14.61701655 +0000 UTC m=+0.068044431 container create c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e (image=quay.io/ceph/ceph:v19, name=vigilant_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:14 np0005537642 systemd[1]: Started libpod-conmon-c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e.scope.
Nov 27 05:58:14 np0005537642 podman[93454]: 2025-11-27 10:58:14.585786254 +0000 UTC m=+0.036814135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:58:14 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:58:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb1b93aa03ceac07e577121068ec549f677a76e044e4723c52b380b980b0a65/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb1b93aa03ceac07e577121068ec549f677a76e044e4723c52b380b980b0a65/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb1b93aa03ceac07e577121068ec549f677a76e044e4723c52b380b980b0a65/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 27 05:58:14 np0005537642 podman[93454]: 2025-11-27 10:58:14.7439634 +0000 UTC m=+0.194991271 container init c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e (image=quay.io/ceph/ceph:v19, name=vigilant_mahavira, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:14 np0005537642 podman[93454]: 2025-11-27 10:58:14.756337791 +0000 UTC m=+0.207365682 container start c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e (image=quay.io/ceph/ceph:v19, name=vigilant_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:14 np0005537642 podman[93454]: 2025-11-27 10:58:14.762297051 +0000 UTC m=+0.213324942 container attach c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e (image=quay.io/ceph/ceph:v19, name=vigilant_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.qnrkij(active, since 2s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 27 05:58:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 27 05:58:15 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14460 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 27 05:58:15 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 27 05:58:15 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 05:58:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:58:15 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:58:15 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:58:15 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:58:15 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:58:15 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:58:15 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:58:15 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:16 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 62 pg[12.0( empty local-lis/les=0/0 n=0 ec=62/62 lis/c=0/0 les/c/f=0/0/0 sis=62) [1] r=0 lpr=62 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:16 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 27 05:58:16 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 27 05:58:16 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.qnrkij(active, since 4s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:16 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Nov 27 05:58:17 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 63 pg[12.0( empty local-lis/les=62/63 n=0 ec=62/62 lis/c=0/0 les/c/f=0/0/0 sis=62) [1] r=0 lpr=62 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:17 np0005537642 systemd[1]: libpod-c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e.scope: Deactivated successfully.
Nov 27 05:58:17 np0005537642 podman[93454]: 2025-11-27 10:58:17.245323616 +0000 UTC m=+2.696351497 container died c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e (image=quay.io/ceph/ceph:v19, name=vigilant_mahavira, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:17 np0005537642 systemd[1]: var-lib-containers-storage-overlay-4fb1b93aa03ceac07e577121068ec549f677a76e044e4723c52b380b980b0a65-merged.mount: Deactivated successfully.
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:17 np0005537642 podman[93454]: 2025-11-27 10:58:17.336987506 +0000 UTC m=+2.788015367 container remove c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e (image=quay.io/ceph/ceph:v19, name=vigilant_mahavira, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 27 05:58:17 np0005537642 systemd[1]: libpod-conmon-c6f9d4e4fce7a2b5b95b8321b17c56578b8082b3e0330ff0dd138bef6c27ee6e.scope: Deactivated successfully.
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:58:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v9: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:17 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 27 05:58:18 np0005537642 python3[94387]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 python3[94583]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764241097.7705512-37285-182325273251658/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=b88ce7cf55567506508db92185485e00fd574b0c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:18 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev d6aa61ad-7de2-4af8-8d25-64acab7fde18 (Updating node-exporter deployment (+2 -> 3))
Nov 27 05:58:18 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Nov 27 05:58:18 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Nov 27 05:58:18 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.qnrkij(active, since 7s), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 05:58:19 np0005537642 python3[94708]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:19 np0005537642 podman[94709]: 2025-11-27 10:58:19.26081246 +0000 UTC m=+0.038161873 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:58:13] ENGINE Bus STARTING
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:58:13] ENGINE Serving on http://192.168.122.100:8765
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:58:13] ENGINE Serving on https://192.168.122.100:7150
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:58:13] ENGINE Bus STARTED
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: [27/Nov/2025:10:58:13] ENGINE Client ('192.168.122.100', 50436) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.conf
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.conf
Nov 27 05:58:19 np0005537642 podman[94709]: 2025-11-27 10:58:19.553352317 +0000 UTC m=+0.330701680 container create fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610 (image=quay.io/ceph/ceph:v19, name=wizardly_austin, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.conf
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:19 np0005537642 systemd[1]: Started libpod-conmon-fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610.scope.
Nov 27 05:58:19 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb589f777746a1230056c23ab95d0267db487666a68462326ba65ea8b96c9f0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:19 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acb589f777746a1230056c23ab95d0267db487666a68462326ba65ea8b96c9f0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:19 np0005537642 podman[94709]: 2025-11-27 10:58:19.679772753 +0000 UTC m=+0.457122156 container init fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610 (image=quay.io/ceph/ceph:v19, name=wizardly_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:19 np0005537642 podman[94709]: 2025-11-27 10:58:19.69131203 +0000 UTC m=+0.468661383 container start fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610 (image=quay.io/ceph/ceph:v19, name=wizardly_austin, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 27 05:58:19 np0005537642 podman[94709]: 2025-11-27 10:58:19.698238477 +0000 UTC m=+0.475587900 container attach fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610 (image=quay.io/ceph/ceph:v19, name=wizardly_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 27 05:58:19 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 27 05:58:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Nov 27 05:58:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1104627218' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 27 05:58:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1104627218' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 27 05:58:20 np0005537642 systemd[1]: libpod-fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610.scope: Deactivated successfully.
Nov 27 05:58:20 np0005537642 podman[94709]: 2025-11-27 10:58:20.2067714 +0000 UTC m=+0.984120723 container died fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610 (image=quay.io/ceph/ceph:v19, name=wizardly_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 27 05:58:20 np0005537642 systemd[1]: var-lib-containers-storage-overlay-acb589f777746a1230056c23ab95d0267db487666a68462326ba65ea8b96c9f0-merged.mount: Deactivated successfully.
Nov 27 05:58:20 np0005537642 podman[94709]: 2025-11-27 10:58:20.278615048 +0000 UTC m=+1.055964411 container remove fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610 (image=quay.io/ceph/ceph:v19, name=wizardly_austin, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Nov 27 05:58:20 np0005537642 systemd[1]: libpod-conmon-fae3959a15942e1ad949d17ad69e2ffc402a87a9f05e16ad27635047cf44b610.scope: Deactivated successfully.
Nov 27 05:58:20 np0005537642 ceph-mon[74338]: Deploying daemon node-exporter.compute-1 on compute-1
Nov 27 05:58:20 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1104627218' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 27 05:58:20 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/1104627218' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 27 05:58:21 np0005537642 python3[94786]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:21 np0005537642 podman[94788]: 2025-11-27 10:58:21.298949136 +0000 UTC m=+0.077068986 container create 30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd (image=quay.io/ceph/ceph:v19, name=nifty_feynman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:21 np0005537642 systemd[1]: Started libpod-conmon-30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd.scope.
Nov 27 05:58:21 np0005537642 podman[94788]: 2025-11-27 10:58:21.266047453 +0000 UTC m=+0.044167343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:21 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:21 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec4eae56477370d135cf69223664c57c0ca5534e8c33df26a959927537a59c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:21 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec4eae56477370d135cf69223664c57c0ca5534e8c33df26a959927537a59c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:21 np0005537642 podman[94788]: 2025-11-27 10:58:21.39919946 +0000 UTC m=+0.177319330 container init 30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd (image=quay.io/ceph/ceph:v19, name=nifty_feynman, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:21 np0005537642 podman[94788]: 2025-11-27 10:58:21.406710553 +0000 UTC m=+0.184830363 container start 30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd (image=quay.io/ceph/ceph:v19, name=nifty_feynman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Nov 27 05:58:21 np0005537642 podman[94788]: 2025-11-27 10:58:21.410545702 +0000 UTC m=+0.188665572 container attach 30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd (image=quay.io/ceph/ceph:v19, name=nifty_feynman, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:21 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Nov 27 05:58:21 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 27 05:58:21 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858573391' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 27 05:58:21 np0005537642 nifty_feynman[94805]: 
Nov 27 05:58:21 np0005537642 nifty_feynman[94805]: {"fsid":"4c838139-e0c9-556a-a9ca-e4422f459af7","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":20,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":84,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":64,"num_osds":3,"num_up_osds":3,"osd_up_since":1764241041,"num_in_osds":3,"osd_in_since":1764241022,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":107831296,"bytes_avail":64304095232,"bytes_total":64411926528,"read_bytes_sec":30029,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":2},"fsmap":{"epoch":2,"btime":"2025-11-27T10:58:12:975683+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":6,"modified":"2025-11-27T10:57:49.929316+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.qnrkij":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.npcryb":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.yyrxaz":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 
0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","24143":{"start_epoch":6,"start_stamp":"2025-11-27T10:57:49.920330+0000","gid":24143,"addr":"192.168.122.101:0/4171051465","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.mkskbt","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 
2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864316","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"675152cd-7f14-4b99-b8fc-74e8884ed61a","zone_name":"default","zonegroup_id":"b3bd84b0-a1e5-48d4-ab74-2f2514937c72","zonegroup_name":"default"},"task_status":{}},"24149":{"start_epoch":6,"start_stamp":"2025-11-27T10:57:49.921471+0000","gid":24149,"addr":"192.168.122.100:0/2968932671","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.xkdunz","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864324","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"675152cd-7f14-4b99-b8fc-74e8884ed61a","zone_name":"default","zonegroup_id":"b3bd84b0-a1e5-48d4-ab74-2f2514937c72","zonegroup_name":"default"},"task_status":{}},"24154":{"start_epoch":6,"start_stamp":"2025-11-27T10:57:49.920505+0000","gid":24154,"addr":"192.168.122.102:0/3611071583","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 
9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.ujaphm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864324","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"675152cd-7f14-4b99-b8fc-74e8884ed61a","zone_name":"default","zonegroup_id":"b3bd84b0-a1e5-48d4-ab74-2f2514937c72","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"d6aa61ad-7de2-4af8-8d25-64acab7fde18":{"message":"Updating node-exporter deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 27 05:58:21 np0005537642 systemd[1]: libpod-30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd.scope: Deactivated successfully.
Nov 27 05:58:21 np0005537642 podman[94788]: 2025-11-27 10:58:21.910939004 +0000 UTC m=+0.689058814 container died 30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd (image=quay.io/ceph/ceph:v19, name=nifty_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 27 05:58:21 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Nov 27 05:58:21 np0005537642 systemd[1]: var-lib-containers-storage-overlay-23ec4eae56477370d135cf69223664c57c0ca5534e8c33df26a959927537a59c-merged.mount: Deactivated successfully.
Nov 27 05:58:21 np0005537642 podman[94788]: 2025-11-27 10:58:21.961076866 +0000 UTC m=+0.739196716 container remove 30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd (image=quay.io/ceph/ceph:v19, name=nifty_feynman, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:21 np0005537642 systemd[1]: libpod-conmon-30163ad76166bac34361bed4b824fc0b43b2e71e57382607d9dddd43038c4efd.scope: Deactivated successfully.
Nov 27 05:58:22 np0005537642 python3[94867]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:22 np0005537642 podman[94868]: 2025-11-27 10:58:22.424754127 +0000 UTC m=+0.057939364 container create ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec (image=quay.io/ceph/ceph:v19, name=stupefied_babbage, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 27 05:58:22 np0005537642 systemd[1]: Started libpod-conmon-ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec.scope.
Nov 27 05:58:22 np0005537642 podman[94868]: 2025-11-27 10:58:22.400753016 +0000 UTC m=+0.033938233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:22 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6563c5d2d8c227ffbe4d04b83985ded0e9d23aeda013ffe76065ed9bdf1e22/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:22 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b6563c5d2d8c227ffbe4d04b83985ded0e9d23aeda013ffe76065ed9bdf1e22/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:22 np0005537642 podman[94868]: 2025-11-27 10:58:22.52602367 +0000 UTC m=+0.159208957 container init ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec (image=quay.io/ceph/ceph:v19, name=stupefied_babbage, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Nov 27 05:58:22 np0005537642 podman[94868]: 2025-11-27 10:58:22.537470024 +0000 UTC m=+0.170655221 container start ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec (image=quay.io/ceph/ceph:v19, name=stupefied_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:22 np0005537642 podman[94868]: 2025-11-27 10:58:22.541662223 +0000 UTC m=+0.174847510 container attach ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec (image=quay.io/ceph/ceph:v19, name=stupefied_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 05:58:22 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:22 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:22 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Nov 27 05:58:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932949581' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 27 05:58:23 np0005537642 stupefied_babbage[94883]: 
Nov 27 05:58:23 np0005537642 stupefied_babbage[94883]: {"epoch":3,"fsid":"4c838139-e0c9-556a-a9ca-e4422f459af7","modified":"2025-11-27T10:56:29.287830Z","created":"2025-11-27T10:53:19.458310Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Nov 27 05:58:23 np0005537642 stupefied_babbage[94883]: dumped monmap epoch 3
Nov 27 05:58:23 np0005537642 systemd[1]: libpod-ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec.scope: Deactivated successfully.
Nov 27 05:58:23 np0005537642 podman[94868]: 2025-11-27 10:58:23.072913091 +0000 UTC m=+0.706098328 container died ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec (image=quay.io/ceph/ceph:v19, name=stupefied_babbage, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:23 np0005537642 systemd[1]: var-lib-containers-storage-overlay-6b6563c5d2d8c227ffbe4d04b83985ded0e9d23aeda013ffe76065ed9bdf1e22-merged.mount: Deactivated successfully.
Nov 27 05:58:23 np0005537642 podman[94868]: 2025-11-27 10:58:23.123217268 +0000 UTC m=+0.756402495 container remove ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec (image=quay.io/ceph/ceph:v19, name=stupefied_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:23 np0005537642 systemd[1]: libpod-conmon-ce168528476984fa62cbe55289c314777db343d599e98e87d7c9dcfea8cd2fec.scope: Deactivated successfully.
Nov 27 05:58:23 np0005537642 ceph-mon[74338]: Deploying daemon node-exporter.compute-2 on compute-2
Nov 27 05:58:23 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 9 op/s
Nov 27 05:58:23 np0005537642 python3[94944]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:24 np0005537642 podman[94945]: 2025-11-27 10:58:24.023883203 +0000 UTC m=+0.057645406 container create 02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc (image=quay.io/ceph/ceph:v19, name=adoring_ellis, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Nov 27 05:58:24 np0005537642 systemd[1]: Started libpod-conmon-02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc.scope.
Nov 27 05:58:24 np0005537642 podman[94945]: 2025-11-27 10:58:23.992370999 +0000 UTC m=+0.026133172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:24 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfdba24794d93ca9f3ee9a5ff439caa4d9b6daeee360801df4dc558d32f8f098/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:24 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfdba24794d93ca9f3ee9a5ff439caa4d9b6daeee360801df4dc558d32f8f098/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:24 np0005537642 podman[94945]: 2025-11-27 10:58:24.122051537 +0000 UTC m=+0.155813700 container init 02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc (image=quay.io/ceph/ceph:v19, name=adoring_ellis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Nov 27 05:58:24 np0005537642 podman[94945]: 2025-11-27 10:58:24.131239348 +0000 UTC m=+0.165001511 container start 02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc (image=quay.io/ceph/ceph:v19, name=adoring_ellis, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:24 np0005537642 podman[94945]: 2025-11-27 10:58:24.135077937 +0000 UTC m=+0.168840120 container attach 02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc (image=quay.io/ceph/ceph:v19, name=adoring_ellis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:58:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Nov 27 05:58:24 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3923402989' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 27 05:58:24 np0005537642 adoring_ellis[94961]: [client.openstack]
Nov 27 05:58:24 np0005537642 adoring_ellis[94961]: #011key = AQB6LShpAAAAABAAj7edPlHnAt7gnU4wzerZlQ==
Nov 27 05:58:24 np0005537642 adoring_ellis[94961]: #011caps mgr = "allow *"
Nov 27 05:58:24 np0005537642 adoring_ellis[94961]: #011caps mon = "profile rbd"
Nov 27 05:58:24 np0005537642 adoring_ellis[94961]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Nov 27 05:58:24 np0005537642 systemd[1]: libpod-02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc.scope: Deactivated successfully.
Nov 27 05:58:24 np0005537642 podman[94945]: 2025-11-27 10:58:24.599295682 +0000 UTC m=+0.633057885 container died 02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc (image=quay.io/ceph/ceph:v19, name=adoring_ellis, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 27 05:58:24 np0005537642 systemd[1]: var-lib-containers-storage-overlay-dfdba24794d93ca9f3ee9a5ff439caa4d9b6daeee360801df4dc558d32f8f098-merged.mount: Deactivated successfully.
Nov 27 05:58:24 np0005537642 podman[94945]: 2025-11-27 10:58:24.675227076 +0000 UTC m=+0.708989269 container remove 02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc (image=quay.io/ceph/ceph:v19, name=adoring_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 05:58:24 np0005537642 systemd[1]: libpod-conmon-02fde160b0220df712620f41473302604ad8c3709f6de1befbce272efce9fbdc.scope: Deactivated successfully.
Nov 27 05:58:24 np0005537642 ceph-mon[74338]: from='client.? 192.168.122.100:0/3923402989' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:25 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev d6aa61ad-7de2-4af8-8d25-64acab7fde18 (Updating node-exporter deployment (+2 -> 3))
Nov 27 05:58:25 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event d6aa61ad-7de2-4af8-8d25-64acab7fde18 (Updating node-exporter deployment (+2 -> 3)) in 6 seconds
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:25 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s
Nov 27 05:58:25 np0005537642 podman[95113]: 2025-11-27 10:58:25.851656182 +0000 UTC m=+0.030586138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:26 np0005537642 podman[95113]: 2025-11-27 10:58:26.077017634 +0000 UTC m=+0.255947540 container create 28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 27 05:58:26 np0005537642 systemd[1]: Started libpod-conmon-28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce.scope.
Nov 27 05:58:26 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:26 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:26 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:26 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:26 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 05:58:26 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:26 np0005537642 podman[95113]: 2025-11-27 10:58:26.309964671 +0000 UTC m=+0.488894577 container init 28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:26 np0005537642 podman[95113]: 2025-11-27 10:58:26.322903198 +0000 UTC m=+0.501833104 container start 28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:26 np0005537642 vigilant_meninsky[95255]: 167 167
Nov 27 05:58:26 np0005537642 systemd[1]: libpod-28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce.scope: Deactivated successfully.
Nov 27 05:58:26 np0005537642 podman[95113]: 2025-11-27 10:58:26.335815434 +0000 UTC m=+0.514745400 container attach 28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:26 np0005537642 podman[95113]: 2025-11-27 10:58:26.338275334 +0000 UTC m=+0.517205240 container died 28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:26 np0005537642 systemd[1]: var-lib-containers-storage-overlay-15c99d99f9ffc5c9d9a9256b7535cb6fb8c37f96c17148504869bb6ff349d4ec-merged.mount: Deactivated successfully.
Nov 27 05:58:26 np0005537642 podman[95113]: 2025-11-27 10:58:26.41076619 +0000 UTC m=+0.589696066 container remove 28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_meninsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Nov 27 05:58:26 np0005537642 systemd[1]: libpod-conmon-28e13439b82787c7ba8eaddad1409ac6094a009af7f772d29674d6670587ecce.scope: Deactivated successfully.
Nov 27 05:58:26 np0005537642 ansible-async_wrapper.py[95259]: Invoked with j151941541910 30 /home/zuul/.ansible/tmp/ansible-tmp-1764241105.8229334-37357-81517695506267/AnsiballZ_command.py _
Nov 27 05:58:26 np0005537642 ansible-async_wrapper.py[95276]: Starting module and watcher
Nov 27 05:58:26 np0005537642 ansible-async_wrapper.py[95276]: Start watching 95277 (30)
Nov 27 05:58:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:26 np0005537642 ansible-async_wrapper.py[95277]: Start module (95277)
Nov 27 05:58:26 np0005537642 ansible-async_wrapper.py[95259]: Return async_wrapper task started.
Nov 27 05:58:26 np0005537642 python3[95278]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:26 np0005537642 podman[95284]: 2025-11-27 10:58:26.62229253 +0000 UTC m=+0.059197700 container create a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noether, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:26 np0005537642 systemd[1]: Started libpod-conmon-a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730.scope.
Nov 27 05:58:26 np0005537642 podman[95284]: 2025-11-27 10:58:26.593698329 +0000 UTC m=+0.030603549 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:26 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:26 np0005537642 podman[95298]: 2025-11-27 10:58:26.698035848 +0000 UTC m=+0.060088455 container create 77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8 (image=quay.io/ceph/ceph:v19, name=determined_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 27 05:58:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfceb965f02cfb32d20516600891e7e66ca01ffcd973038c7c02b81fa34e384/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfceb965f02cfb32d20516600891e7e66ca01ffcd973038c7c02b81fa34e384/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfceb965f02cfb32d20516600891e7e66ca01ffcd973038c7c02b81fa34e384/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfceb965f02cfb32d20516600891e7e66ca01ffcd973038c7c02b81fa34e384/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfceb965f02cfb32d20516600891e7e66ca01ffcd973038c7c02b81fa34e384/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:26 np0005537642 podman[95284]: 2025-11-27 10:58:26.724288912 +0000 UTC m=+0.161194122 container init a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:26 np0005537642 podman[95284]: 2025-11-27 10:58:26.734967325 +0000 UTC m=+0.171872455 container start a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noether, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:58:26 np0005537642 systemd[1]: Started libpod-conmon-77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8.scope.
Nov 27 05:58:26 np0005537642 podman[95284]: 2025-11-27 10:58:26.738784924 +0000 UTC m=+0.175690084 container attach a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noether, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 27 05:58:26 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:26 np0005537642 podman[95298]: 2025-11-27 10:58:26.672467953 +0000 UTC m=+0.034520590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45e8262031da0a79b9f98f07f8b2e0003d26603571c2d8d7b3a7d04c665ba8c6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:26 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45e8262031da0a79b9f98f07f8b2e0003d26603571c2d8d7b3a7d04c665ba8c6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:26 np0005537642 podman[95298]: 2025-11-27 10:58:26.851091819 +0000 UTC m=+0.213144496 container init 77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8 (image=quay.io/ceph/ceph:v19, name=determined_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True)
Nov 27 05:58:26 np0005537642 podman[95298]: 2025-11-27 10:58:26.859850447 +0000 UTC m=+0.221903084 container start 77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8 (image=quay.io/ceph/ceph:v19, name=determined_lamarr, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 27 05:58:27 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 14 completed events
Nov 27 05:58:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:58:27 np0005537642 affectionate_noether[95313]: --> passed data devices: 0 physical, 1 LVM
Nov 27 05:58:27 np0005537642 affectionate_noether[95313]: --> All data devices are unavailable
Nov 27 05:58:27 np0005537642 podman[95298]: 2025-11-27 10:58:27.070635226 +0000 UTC m=+0.432687873 container attach 77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8 (image=quay.io/ceph/ceph:v19, name=determined_lamarr, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 27 05:58:27 np0005537642 systemd[1]: libpod-a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730.scope: Deactivated successfully.
Nov 27 05:58:27 np0005537642 podman[95284]: 2025-11-27 10:58:27.091330463 +0000 UTC m=+0.528235633 container died a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noether, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:27 np0005537642 systemd[1]: var-lib-containers-storage-overlay-fcfceb965f02cfb32d20516600891e7e66ca01ffcd973038c7c02b81fa34e384-merged.mount: Deactivated successfully.
Nov 27 05:58:27 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14496 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 27 05:58:27 np0005537642 determined_lamarr[95321]: 
Nov 27 05:58:27 np0005537642 determined_lamarr[95321]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 27 05:58:27 np0005537642 systemd[1]: libpod-77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8.scope: Deactivated successfully.
Nov 27 05:58:27 np0005537642 podman[95284]: 2025-11-27 10:58:27.375633736 +0000 UTC m=+0.812538906 container remove a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:27 np0005537642 podman[95298]: 2025-11-27 10:58:27.378775545 +0000 UTC m=+0.740828192 container died 77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8 (image=quay.io/ceph/ceph:v19, name=determined_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Nov 27 05:58:27 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:27 np0005537642 systemd[1]: libpod-conmon-a4414d3de310760578558304e4ea356d2fa1724376854619a07709e8d7bb9730.scope: Deactivated successfully.
Nov 27 05:58:27 np0005537642 systemd[1]: var-lib-containers-storage-overlay-45e8262031da0a79b9f98f07f8b2e0003d26603571c2d8d7b3a7d04c665ba8c6-merged.mount: Deactivated successfully.
Nov 27 05:58:27 np0005537642 podman[95298]: 2025-11-27 10:58:27.471189716 +0000 UTC m=+0.833242323 container remove 77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8 (image=quay.io/ceph/ceph:v19, name=determined_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 27 05:58:27 np0005537642 systemd[1]: libpod-conmon-77dffc2b2bbee9bb089b1101f37fcdc28b6d9cfae10c07060bb09422d92680f8.scope: Deactivated successfully.
Nov 27 05:58:27 np0005537642 ansible-async_wrapper.py[95277]: Module complete (95277)
Nov 27 05:58:27 np0005537642 python3[95479]: ansible-ansible.legacy.async_status Invoked with jid=j151941541910.95259 mode=status _async_dir=/root/.ansible_async
Nov 27 05:58:27 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Nov 27 05:58:28 np0005537642 podman[95548]: 2025-11-27 10:58:28.034213266 +0000 UTC m=+0.056369050 container create 5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 27 05:58:28 np0005537642 systemd[1]: Started libpod-conmon-5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8.scope.
Nov 27 05:58:28 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:28 np0005537642 podman[95548]: 2025-11-27 10:58:28.010023249 +0000 UTC m=+0.032179013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:28 np0005537642 podman[95548]: 2025-11-27 10:58:28.110650093 +0000 UTC m=+0.132805857 container init 5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 27 05:58:28 np0005537642 podman[95548]: 2025-11-27 10:58:28.116389396 +0000 UTC m=+0.138545160 container start 5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:58:28 np0005537642 dreamy_bouman[95586]: 167 167
Nov 27 05:58:28 np0005537642 systemd[1]: libpod-5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8.scope: Deactivated successfully.
Nov 27 05:58:28 np0005537642 podman[95548]: 2025-11-27 10:58:28.122426317 +0000 UTC m=+0.144582071 container attach 5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 27 05:58:28 np0005537642 podman[95548]: 2025-11-27 10:58:28.1228627 +0000 UTC m=+0.145018464 container died 5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_bouman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:28 np0005537642 python3[95583]: ansible-ansible.legacy.async_status Invoked with jid=j151941541910.95259 mode=cleanup _async_dir=/root/.ansible_async
Nov 27 05:58:28 np0005537642 systemd[1]: var-lib-containers-storage-overlay-c9f2272839d72cfe74e117c4a5db07d0190006ed6b33d645b7ec576c482c68d8-merged.mount: Deactivated successfully.
Nov 27 05:58:28 np0005537642 podman[95548]: 2025-11-27 10:58:28.182719127 +0000 UTC m=+0.204874871 container remove 5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_bouman, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Nov 27 05:58:28 np0005537642 systemd[1]: libpod-conmon-5b06e4c2a8a65496efef2508a3d9929862740fb9615c991f4337024549c4a1d8.scope: Deactivated successfully.
Nov 27 05:58:28 np0005537642 podman[95610]: 2025-11-27 10:58:28.377532252 +0000 UTC m=+0.056156994 container create f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:58:28 np0005537642 systemd[1]: Started libpod-conmon-f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec.scope.
Nov 27 05:58:28 np0005537642 podman[95610]: 2025-11-27 10:58:28.350033002 +0000 UTC m=+0.028657774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:28 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f91951ecd5096a311a7b9fcaac777cb1ab4ead733938044d7ff7395a4ae3e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f91951ecd5096a311a7b9fcaac777cb1ab4ead733938044d7ff7395a4ae3e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f91951ecd5096a311a7b9fcaac777cb1ab4ead733938044d7ff7395a4ae3e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76f91951ecd5096a311a7b9fcaac777cb1ab4ead733938044d7ff7395a4ae3e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:28 np0005537642 podman[95610]: 2025-11-27 10:58:28.501329013 +0000 UTC m=+0.179953795 container init f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:58:28 np0005537642 podman[95610]: 2025-11-27 10:58:28.50969147 +0000 UTC m=+0.188316242 container start f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:28 np0005537642 podman[95610]: 2025-11-27 10:58:28.514903328 +0000 UTC m=+0.193528150 container attach f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]: {
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:    "1": [
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:        {
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "devices": [
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "/dev/loop3"
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            ],
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "lv_name": "ceph_lv0",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "lv_size": "21470642176",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4c838139-e0c9-556a-a9ca-e4422f459af7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=047f3e15-ba18-4c86-b24b-f8e9584c5eff,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "lv_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "name": "ceph_lv0",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "tags": {
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.block_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.cephx_lockbox_secret": "",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.cluster_fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.cluster_name": "ceph",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.crush_device_class": "",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.encrypted": "0",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.osd_fsid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.osd_id": "1",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.type": "block",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.vdo": "0",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:                "ceph.with_tpm": "0"
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            },
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "type": "block",
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:            "vg_name": "ceph_vg0"
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:        }
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]:    ]
Nov 27 05:58:28 np0005537642 optimistic_kepler[95626]: }
Nov 27 05:58:28 np0005537642 systemd[1]: libpod-f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec.scope: Deactivated successfully.
Nov 27 05:58:28 np0005537642 podman[95610]: 2025-11-27 10:58:28.815503834 +0000 UTC m=+0.494128606 container died f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Nov 27 05:58:28 np0005537642 systemd[1]: var-lib-containers-storage-overlay-76f91951ecd5096a311a7b9fcaac777cb1ab4ead733938044d7ff7395a4ae3e8-merged.mount: Deactivated successfully.
Nov 27 05:58:28 np0005537642 python3[95656]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:28 np0005537642 podman[95610]: 2025-11-27 10:58:28.872753558 +0000 UTC m=+0.551378330 container remove f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_kepler, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Nov 27 05:58:28 np0005537642 systemd[1]: libpod-conmon-f33e35bb8ee4be45bf056a34a776fa9d0d4e8ac3026e4920bf1b8246ed1614ec.scope: Deactivated successfully.
Nov 27 05:58:28 np0005537642 podman[95672]: 2025-11-27 10:58:28.937622988 +0000 UTC m=+0.048002123 container create e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6 (image=quay.io/ceph/ceph:v19, name=interesting_sammet, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 27 05:58:28 np0005537642 systemd[1]: Started libpod-conmon-e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6.scope.
Nov 27 05:58:28 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/798b35f47b1ba926597461f7d2e8a55695156bc61b553b7c9f46dd7bfb2491fd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:28 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/798b35f47b1ba926597461f7d2e8a55695156bc61b553b7c9f46dd7bfb2491fd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:29 np0005537642 podman[95672]: 2025-11-27 10:58:29.003199468 +0000 UTC m=+0.113578593 container init e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6 (image=quay.io/ceph/ceph:v19, name=interesting_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:29 np0005537642 podman[95672]: 2025-11-27 10:58:29.012354997 +0000 UTC m=+0.122734112 container start e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6 (image=quay.io/ceph/ceph:v19, name=interesting_sammet, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:29 np0005537642 podman[95672]: 2025-11-27 10:58:28.919592166 +0000 UTC m=+0.029971301 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:29 np0005537642 podman[95672]: 2025-11-27 10:58:29.019345326 +0000 UTC m=+0.129724441 container attach e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6 (image=quay.io/ceph/ceph:v19, name=interesting_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:58:29 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 27 05:58:29 np0005537642 interesting_sammet[95705]: 
Nov 27 05:58:29 np0005537642 interesting_sammet[95705]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 27 05:58:29 np0005537642 systemd[1]: libpod-e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6.scope: Deactivated successfully.
Nov 27 05:58:29 np0005537642 podman[95672]: 2025-11-27 10:58:29.394842736 +0000 UTC m=+0.505221861 container died e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6 (image=quay.io/ceph/ceph:v19, name=interesting_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:29 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:30 np0005537642 systemd[1]: var-lib-containers-storage-overlay-798b35f47b1ba926597461f7d2e8a55695156bc61b553b7c9f46dd7bfb2491fd-merged.mount: Deactivated successfully.
Nov 27 05:58:30 np0005537642 podman[95672]: 2025-11-27 10:58:30.387060558 +0000 UTC m=+1.497439713 container remove e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6 (image=quay.io/ceph/ceph:v19, name=interesting_sammet, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 27 05:58:30 np0005537642 systemd[1]: libpod-conmon-e8f65a72b583480ca6496480a2ba3893fd5f47bb8813edba7c1cb455d87f96e6.scope: Deactivated successfully.
Nov 27 05:58:30 np0005537642 podman[95814]: 2025-11-27 10:58:30.598620138 +0000 UTC m=+0.066512597 container create c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_roentgen, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 05:58:30 np0005537642 systemd[1]: Started libpod-conmon-c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956.scope.
Nov 27 05:58:30 np0005537642 podman[95814]: 2025-11-27 10:58:30.572391794 +0000 UTC m=+0.040284313 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:30 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:30 np0005537642 podman[95814]: 2025-11-27 10:58:30.68013466 +0000 UTC m=+0.148027169 container init c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_roentgen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:58:30 np0005537642 podman[95814]: 2025-11-27 10:58:30.689727732 +0000 UTC m=+0.157620191 container start c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:58:30 np0005537642 podman[95814]: 2025-11-27 10:58:30.693908381 +0000 UTC m=+0.161800910 container attach c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:58:30 np0005537642 lucid_roentgen[95830]: 167 167
Nov 27 05:58:30 np0005537642 systemd[1]: libpod-c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956.scope: Deactivated successfully.
Nov 27 05:58:30 np0005537642 podman[95814]: 2025-11-27 10:58:30.696316519 +0000 UTC m=+0.164208988 container died c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:30 np0005537642 systemd[1]: var-lib-containers-storage-overlay-91093577ca6293a77c559eeca5e0d82a1326673707ff0cb762d9fd2dd937277c-merged.mount: Deactivated successfully.
Nov 27 05:58:30 np0005537642 podman[95814]: 2025-11-27 10:58:30.747982535 +0000 UTC m=+0.215874954 container remove c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_roentgen, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:30 np0005537642 systemd[1]: libpod-conmon-c1aa3b1386470016f12ee14b445ce4a58e2a9255fd46146fe5c5f32c12fff956.scope: Deactivated successfully.
Nov 27 05:58:30 np0005537642 podman[95854]: 2025-11-27 10:58:30.94807285 +0000 UTC m=+0.061401363 container create e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shtern, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 27 05:58:31 np0005537642 systemd[1]: Started libpod-conmon-e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d.scope.
Nov 27 05:58:31 np0005537642 podman[95854]: 2025-11-27 10:58:30.92623109 +0000 UTC m=+0.039559623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:31 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d94274347eeef833de96d8e0f42b46586fcd1f6e6358729c8d0a9be168aedee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d94274347eeef833de96d8e0f42b46586fcd1f6e6358729c8d0a9be168aedee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d94274347eeef833de96d8e0f42b46586fcd1f6e6358729c8d0a9be168aedee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d94274347eeef833de96d8e0f42b46586fcd1f6e6358729c8d0a9be168aedee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:31 np0005537642 podman[95854]: 2025-11-27 10:58:31.066385655 +0000 UTC m=+0.179714188 container init e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shtern, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Nov 27 05:58:31 np0005537642 podman[95854]: 2025-11-27 10:58:31.083185712 +0000 UTC m=+0.196514225 container start e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shtern, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 27 05:58:31 np0005537642 podman[95854]: 2025-11-27 10:58:31.087736291 +0000 UTC m=+0.201064854 container attach e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Nov 27 05:58:31 np0005537642 python3[95901]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:31 np0005537642 ansible-async_wrapper.py[95276]: Done in kid B.
Nov 27 05:58:31 np0005537642 podman[95919]: 2025-11-27 10:58:31.495230099 +0000 UTC m=+0.032606836 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:31 np0005537642 podman[95919]: 2025-11-27 10:58:31.686647008 +0000 UTC m=+0.224023655 container create 161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08 (image=quay.io/ceph/ceph:v19, name=great_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:31 np0005537642 systemd[1]: Started libpod-conmon-161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08.scope.
Nov 27 05:58:31 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde703ed06de00175b86bdeaa44d71ec16c50726775a8c471643017b1c023412/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:31 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde703ed06de00175b86bdeaa44d71ec16c50726775a8c471643017b1c023412/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:31 np0005537642 podman[95919]: 2025-11-27 10:58:31.795027251 +0000 UTC m=+0.332403918 container init 161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08 (image=quay.io/ceph/ceph:v19, name=great_hamilton, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:31 np0005537642 podman[95919]: 2025-11-27 10:58:31.80274533 +0000 UTC m=+0.340121987 container start 161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08 (image=quay.io/ceph/ceph:v19, name=great_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:31 np0005537642 podman[95919]: 2025-11-27 10:58:31.80665341 +0000 UTC m=+0.344030137 container attach 161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08 (image=quay.io/ceph/ceph:v19, name=great_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:31 np0005537642 lvm[95990]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 05:58:31 np0005537642 lvm[95990]: VG ceph_vg0 finished
Nov 27 05:58:31 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:31 np0005537642 jovial_shtern[95871]: {}
Nov 27 05:58:31 np0005537642 systemd[1]: libpod-e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d.scope: Deactivated successfully.
Nov 27 05:58:31 np0005537642 podman[95854]: 2025-11-27 10:58:31.963462768 +0000 UTC m=+1.076791251 container died e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shtern, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 27 05:58:31 np0005537642 systemd[1]: libpod-e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d.scope: Consumed 1.462s CPU time.
Nov 27 05:58:31 np0005537642 systemd[1]: var-lib-containers-storage-overlay-6d94274347eeef833de96d8e0f42b46586fcd1f6e6358729c8d0a9be168aedee-merged.mount: Deactivated successfully.
Nov 27 05:58:32 np0005537642 podman[95854]: 2025-11-27 10:58:32.013540208 +0000 UTC m=+1.126868701 container remove e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shtern, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 27 05:58:32 np0005537642 systemd[1]: libpod-conmon-e604beadeaed077d27910d154c2f55d6d18700bb0886a511ac09cd4d1c83433d.scope: Deactivated successfully.
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:32 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:32 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:32 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 2cd04496-45ac-4e0d-bf3c-74e472076093 (Updating mds.cephfs deployment (+3 -> 3))
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pktzxb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pktzxb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pktzxb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:32 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.pktzxb on compute-2
Nov 27 05:58:32 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.pktzxb on compute-2
Nov 27 05:58:32 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14508 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 27 05:58:32 np0005537642 great_hamilton[95980]: 
Nov 27 05:58:32 np0005537642 great_hamilton[95980]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 27 05:58:32 np0005537642 systemd[1]: libpod-161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08.scope: Deactivated successfully.
Nov 27 05:58:32 np0005537642 podman[95919]: 2025-11-27 10:58:32.213930082 +0000 UTC m=+0.751306759 container died 161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08 (image=quay.io/ceph/ceph:v19, name=great_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 27 05:58:32 np0005537642 systemd[1]: var-lib-containers-storage-overlay-dde703ed06de00175b86bdeaa44d71ec16c50726775a8c471643017b1c023412-merged.mount: Deactivated successfully.
Nov 27 05:58:32 np0005537642 podman[95919]: 2025-11-27 10:58:32.288289831 +0000 UTC m=+0.825666478 container remove 161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08 (image=quay.io/ceph/ceph:v19, name=great_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:32 np0005537642 systemd[1]: libpod-conmon-161a607fcbe4d9bd87a592827d9c82171c22a9f0ed76e4e482f5b95bb6e61a08.scope: Deactivated successfully.
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pktzxb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.pktzxb", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: Deploying daemon mds.cephfs.compute-2.pktzxb on compute-2
Nov 27 05:58:33 np0005537642 python3[96065]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:33 np0005537642 podman[96066]: 2025-11-27 10:58:33.487351189 +0000 UTC m=+0.056541584 container create 57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601 (image=quay.io/ceph/ceph:v19, name=nice_lovelace, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:33 np0005537642 systemd[1]: Started libpod-conmon-57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601.scope.
Nov 27 05:58:33 np0005537642 podman[96066]: 2025-11-27 10:58:33.460327423 +0000 UTC m=+0.029517828 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:33 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:33 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ed79e2af6d9346913acf8879fc5fa2d5dcbf135d9273e1d3d2539d6e56b73a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:33 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21ed79e2af6d9346913acf8879fc5fa2d5dcbf135d9273e1d3d2539d6e56b73a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:33 np0005537642 podman[96066]: 2025-11-27 10:58:33.59349853 +0000 UTC m=+0.162688965 container init 57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601 (image=quay.io/ceph/ceph:v19, name=nice_lovelace, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 27 05:58:33 np0005537642 podman[96066]: 2025-11-27 10:58:33.601805166 +0000 UTC m=+0.170995551 container start 57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601 (image=quay.io/ceph/ceph:v19, name=nice_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:33 np0005537642 podman[96066]: 2025-11-27 10:58:33.606377935 +0000 UTC m=+0.175568320 container attach 57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601 (image=quay.io/ceph/ceph:v19, name=nice_lovelace, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 27 05:58:33 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pbsgjz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pbsgjz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 27 05:58:34 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 27 05:58:34 np0005537642 nice_lovelace[96081]: 
Nov 27 05:58:34 np0005537642 nice_lovelace[96081]: [{"container_id": "ecb89941845f", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.10%", "created": "2025-11-27T10:54:15.149927Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-27T10:58:14.334894Z", "memory_usage": 7795113, "ports": [], "service_name": "crash", "started": "2025-11-27T10:54:15.015680Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@crash.compute-0", "version": "19.2.3"}, {"container_id": "f6b1a7f4d559", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.27%", "created": "2025-11-27T10:55:03.242198Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-27T10:58:13.592058Z", "memory_usage": 7825522, "ports": [], "service_name": "crash", "started": "2025-11-27T10:55:03.127496Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@crash.compute-1", "version": "19.2.3"}, {"container_id": "3544fc189193", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.24%", "created": "2025-11-27T10:56:59.457151Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-27T10:58:13.700830Z", "memory_usage": 7821328, "ports": [], "service_name": "crash", "started": "2025-11-27T10:56:59.342873Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-2.pktzxb", "daemon_name": "mds.cephfs.compute-2.pktzxb", "daemon_type": "mds", "events": ["2025-11-27T10:58:33.831127Z daemon:mds.cephfs.compute-2.pktzxb [INFO] \"Deployed mds.cephfs.compute-2.pktzxb on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "ce70338c0e33", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "18.90%", "created": "2025-11-27T10:53:26.938911Z", "daemon_id": "compute-0.qnrkij", "daemon_name": "mgr.compute-0.qnrkij", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-27T10:58:14.334685Z", "memory_usage": 543581798, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-27T10:53:26.800753Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@mgr.compute-0.qnrkij", "version": "19.2.3"}, {"container_id": "74b585247437", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "26.19%", "created": "2025-11-27T10:56:40.196573Z", "daemon_id": "compute-1.npcryb", "daemon_name": "mgr.compute-1.npcryb", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-27T10:58:13.592649Z", "memory_usage": 503840768, "ports": [8765], "service_name": "mgr", "started": "2025-11-27T10:56:40.083284Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@mgr.compute-1.npcryb", "version": "19.2.3"}, {"container_id": "6ebec15612d1", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "25.81%", "created": "2025-11-27T10:56:37.635612Z", "daemon_id": "compute-2.yyrxaz", "daemon_name": "mgr.compute-2.yyrxaz", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-27T10:58:13.700495Z", "memory_usage": 506776780, "ports": [8765], "service_name": "mgr", "started": "2025-11-27T10:56:37.534400Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@mgr.compute-2.yyrxaz", "version": "19.2.3"}, {"container_id": "10d3b07b5dbe", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.44%", "created": "2025-11-27T10:53:22.306689Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-27T10:58:14.334391Z", "memory_request": 2147483648, "memory_usage": 60702064, "ports": [], "service_name": "mon", "started": "2025-11-27T10:53:24.756245Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@mon.compute-0", "version": "19.2.3"}, {"container_id": "92279ac45003", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.67%", "created": "2025-11-27T10:56:29.065307Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2025-11-27T10:58:13.592438Z", "memory_request": 2147483648, "memory_usage": 52135198, "ports": [], "service_name": "mon", "started": "2025-11-27T10:56:28.944912Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-4c838139-e0c9-556a-a9ca-e4422f459af7@mon.compute-1", "version": "19.2.3"}, {"container_id": "e34390d70697", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.79%", "created": "2025-11-27T10:56:19.212171Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2025-11-27T10:58:13.700222Z", "memory_request": 2147483648, "memory_usage": 5242880
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pbsgjz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 27 05:58:34 np0005537642 systemd[1]: libpod-57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601.scope: Deactivated successfully.
Nov 27 05:58:34 np0005537642 podman[96066]: 2025-11-27 10:58:34.122212096 +0000 UTC m=+0.691402481 container died 57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601 (image=quay.io/ceph/ceph:v19, name=nice_lovelace, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:34 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.pbsgjz on compute-0
Nov 27 05:58:34 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.pbsgjz on compute-0
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e3 new map
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2025-11-27T10:58:34:116201+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:12.975652+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.pktzxb{-1:24226} state up:standby seq 1 addr [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] compat {c=[1],r=[1],i=[1fff]}]
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] up:boot
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] as mds.0
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.pktzxb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 27 05:58:34 np0005537642 ceph-mgr[74636]: mgr.server handle_open ignoring open from mds.cephfs.compute-2.pktzxb v2:192.168.122.102:6804/4184153211; not ready for session (expect reconnect)
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.pktzxb"} v 0)
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.pktzxb"}]: dispatch
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e3 all = 0
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e4 new map
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2025-11-27T10:58:34:194850+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:34.194837+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24226}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.pktzxb{0:24226} state up:creating seq 1 addr [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pktzxb=up:creating}
Nov 27 05:58:34 np0005537642 systemd[1]: var-lib-containers-storage-overlay-21ed79e2af6d9346913acf8879fc5fa2d5dcbf135d9273e1d3d2539d6e56b73a-merged.mount: Deactivated successfully.
Nov 27 05:58:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.pktzxb is now active in filesystem cephfs as rank 0
Nov 27 05:58:34 np0005537642 rsyslogd[1004]: message too long (15543) with configured size 8096, begin of message is: [{"container_id": "ecb89941845f", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 27 05:58:34 np0005537642 podman[96066]: 2025-11-27 10:58:34.328336052 +0000 UTC m=+0.897526467 container remove 57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601 (image=quay.io/ceph/ceph:v19, name=nice_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 27 05:58:34 np0005537642 systemd[1]: libpod-conmon-57fd89dfe6c3fc58ae301ac077067b5e79d403106ae65b157780384eb897c601.scope: Deactivated successfully.
Nov 27 05:58:34 np0005537642 podman[96210]: 2025-11-27 10:58:34.810048665 +0000 UTC m=+0.027324056 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pbsgjz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.pbsgjz", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: Deploying daemon mds.cephfs.compute-0.pbsgjz on compute-0
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: daemon mds.cephfs.compute-2.pktzxb assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: Cluster is now healthy
Nov 27 05:58:35 np0005537642 ceph-mon[74338]: daemon mds.cephfs.compute-2.pktzxb is now active in filesystem cephfs as rank 0
Nov 27 05:58:35 np0005537642 podman[96210]: 2025-11-27 10:58:35.566685874 +0000 UTC m=+0.783961215 container create 2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325)
Nov 27 05:58:35 np0005537642 python3[96249]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:35 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e5 new map
Nov 27 05:58:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2025-11-27T10:58:35:393794+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:35.393791+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24226}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24226 members: 24226#012[mds.cephfs.compute-2.pktzxb{0:24226} state up:active seq 2 addr [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Nov 27 05:58:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] up:active
Nov 27 05:58:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pktzxb=up:active}
Nov 27 05:58:36 np0005537642 systemd[1]: Started libpod-conmon-2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b.scope.
Nov 27 05:58:36 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:36 np0005537642 podman[96250]: 2025-11-27 10:58:36.023385896 +0000 UTC m=+0.119672104 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:36 np0005537642 podman[96250]: 2025-11-27 10:58:36.281254751 +0000 UTC m=+0.377540909 container create 312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a (image=quay.io/ceph/ceph:v19, name=serene_dubinsky, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Nov 27 05:58:36 np0005537642 systemd[1]: Started libpod-conmon-312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a.scope.
Nov 27 05:58:36 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:36 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8548323a84b4d9b7bd4f203b1e9c1070f5d68316f4ff858f2b5f3afbaede0fc5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:36 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8548323a84b4d9b7bd4f203b1e9c1070f5d68316f4ff858f2b5f3afbaede0fc5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:36 np0005537642 podman[96210]: 2025-11-27 10:58:36.749945374 +0000 UTC m=+1.967220765 container init 2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_beaver, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:36 np0005537642 podman[96210]: 2025-11-27 10:58:36.76036577 +0000 UTC m=+1.977641111 container start 2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_beaver, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:36 np0005537642 sweet_beaver[96265]: 167 167
Nov 27 05:58:36 np0005537642 systemd[1]: libpod-2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b.scope: Deactivated successfully.
Nov 27 05:58:36 np0005537642 podman[96210]: 2025-11-27 10:58:36.866686436 +0000 UTC m=+2.083961777 container attach 2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_beaver, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 27 05:58:36 np0005537642 podman[96210]: 2025-11-27 10:58:36.867856979 +0000 UTC m=+2.085132310 container died 2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Nov 27 05:58:37 np0005537642 systemd[1]: var-lib-containers-storage-overlay-72b33430ab0721af520349fd774669b05aad6e126bd95b54a401272d11164e7e-merged.mount: Deactivated successfully.
Nov 27 05:58:37 np0005537642 podman[96210]: 2025-11-27 10:58:37.762755299 +0000 UTC m=+2.980030630 container remove 2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_beaver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Nov 27 05:58:37 np0005537642 systemd[1]: libpod-conmon-2debd5a6ed79590dfd2ec5f60da30a0f2ced8d6944dd7eaafcbc99cfff1fb79b.scope: Deactivated successfully.
Nov 27 05:58:37 np0005537642 podman[96250]: 2025-11-27 10:58:37.8276538 +0000 UTC m=+1.923939969 container init 312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a (image=quay.io/ceph/ceph:v19, name=serene_dubinsky, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:58:37 np0005537642 podman[96250]: 2025-11-27 10:58:37.834378381 +0000 UTC m=+1.930664539 container start 312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a (image=quay.io/ceph/ceph:v19, name=serene_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:37 np0005537642 podman[96250]: 2025-11-27 10:58:37.865021691 +0000 UTC m=+1.961307889 container attach 312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a (image=quay.io/ceph/ceph:v19, name=serene_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 05:58:37 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 454 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:58:38 np0005537642 systemd[1]: Reloading.
Nov 27 05:58:38 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:58:38 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:58:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Nov 27 05:58:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1604522765' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 27 05:58:38 np0005537642 serene_dubinsky[96270]: 
Nov 27 05:58:38 np0005537642 serene_dubinsky[96270]: {"fsid":"4c838139-e0c9-556a-a9ca-e4422f459af7","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":20,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":100,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":64,"num_osds":3,"num_up_osds":3,"osd_up_since":1764241041,"num_in_osds":3,"osd_in_since":1764241022,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":107892736,"bytes_avail":64304033792,"bytes_total":64411926528},"fsmap":{"epoch":5,"btime":"2025-11-27T10:58:35:393794+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.pktzxb","status":"up:active","gid":24226}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":6,"modified":"2025-11-27T10:57:49.929316+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.qnrkij":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.npcryb":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.yyrxaz":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address 
family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","24143":{"start_epoch":6,"start_stamp":"2025-11-27T10:57:49.920330+0000","gid":24143,"addr":"192.168.122.101:0/4171051465","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.mkskbt","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864316","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"675152cd-7f14-4b99-b8fc-74e8884ed61a","zone_name":"default","zonegroup_id":"b3bd84b0-a1e5-48d4-ab74-2f2514937c72","zonegroup_name":"default"},"task_status":{}},"24149":{"start_epoch":6,"start_stamp":"2025-11-27T10:57:49.921471+0000","gid":24149,"addr":"192.168.122.100:0/2968932671","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid 
(stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.xkdunz","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864324","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"675152cd-7f14-4b99-b8fc-74e8884ed61a","zone_name":"default","zonegroup_id":"b3bd84b0-a1e5-48d4-ab74-2f2514937c72","zonegroup_name":"default"},"task_status":{}},"24154":{"start_epoch":6,"start_stamp":"2025-11-27T10:57:49.920505+0000","gid":24154,"addr":"192.168.122.102:0/3611071583","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.ujaphm","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 
2025","kernel_version":"5.14.0-642.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864324","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"675152cd-7f14-4b99-b8fc-74e8884ed61a","zone_name":"default","zonegroup_id":"b3bd84b0-a1e5-48d4-ab74-2f2514937c72","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"2cd04496-45ac-4e0d-bf3c-74e472076093":{"message":"Updating mds.cephfs deployment (+3 -> 3) (1s)\n      [=========...................] (remaining: 3s)","progress":0.3333333432674408,"add_to_ceph_s":true}}}
Nov 27 05:58:38 np0005537642 systemd[1]: libpod-312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a.scope: Deactivated successfully.
Nov 27 05:58:38 np0005537642 podman[96250]: 2025-11-27 10:58:38.332198241 +0000 UTC m=+2.428484399 container died 312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a (image=quay.io/ceph/ceph:v19, name=serene_dubinsky, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 27 05:58:38 np0005537642 systemd[1]: Reloading.
Nov 27 05:58:38 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:58:38 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:58:38 np0005537642 systemd[1]: Starting Ceph mds.cephfs.compute-0.pbsgjz for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:58:39 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8548323a84b4d9b7bd4f203b1e9c1070f5d68316f4ff858f2b5f3afbaede0fc5-merged.mount: Deactivated successfully.
Nov 27 05:58:39 np0005537642 podman[96250]: 2025-11-27 10:58:39.668065649 +0000 UTC m=+3.764351767 container remove 312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a (image=quay.io/ceph/ceph:v19, name=serene_dubinsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 27 05:58:39 np0005537642 systemd[1]: libpod-conmon-312145fec01e61a1d7b17b79f9be3d8e15920d310ce82e3dd3abf7d34edd537a.scope: Deactivated successfully.
Nov 27 05:58:39 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 27 05:58:39 np0005537642 podman[96449]: 2025-11-27 10:58:39.84121436 +0000 UTC m=+0.030409114 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:40 np0005537642 podman[96449]: 2025-11-27 10:58:40.444278645 +0000 UTC m=+0.633473359 container create f065abb4f544792918084c5ed281ef19e5ab4ab7b5656e4afc127a5ba0d2b120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mds-cephfs-compute-0-pbsgjz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49af6ba19ebf24f93c665772bbaec7e3eac4350181bb156327a603aeba04a6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49af6ba19ebf24f93c665772bbaec7e3eac4350181bb156327a603aeba04a6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49af6ba19ebf24f93c665772bbaec7e3eac4350181bb156327a603aeba04a6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d49af6ba19ebf24f93c665772bbaec7e3eac4350181bb156327a603aeba04a6d/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.pbsgjz supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:41 np0005537642 python3[96487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:41 np0005537642 podman[96449]: 2025-11-27 10:58:41.213529723 +0000 UTC m=+1.402724487 container init f065abb4f544792918084c5ed281ef19e5ab4ab7b5656e4afc127a5ba0d2b120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mds-cephfs-compute-0-pbsgjz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True)
Nov 27 05:58:41 np0005537642 podman[96449]: 2025-11-27 10:58:41.223829605 +0000 UTC m=+1.413024319 container start f065abb4f544792918084c5ed281ef19e5ab4ab7b5656e4afc127a5ba0d2b120 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mds-cephfs-compute-0-pbsgjz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 27 05:58:41 np0005537642 ceph-mds[96507]: set uid:gid to 167:167 (ceph:ceph)
Nov 27 05:58:41 np0005537642 ceph-mds[96507]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Nov 27 05:58:41 np0005537642 ceph-mds[96507]: main not setting numa affinity
Nov 27 05:58:41 np0005537642 ceph-mds[96507]: pidfile_write: ignore empty --pid-file
Nov 27 05:58:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mds-cephfs-compute-0-pbsgjz[96491]: starting mds.cephfs.compute-0.pbsgjz at 
Nov 27 05:58:41 np0005537642 ceph-mds[96507]: mds.cephfs.compute-0.pbsgjz Updating MDS map to version 5 from mon.0
Nov 27 05:58:41 np0005537642 podman[96494]: 2025-11-27 10:58:41.231602775 +0000 UTC m=+0.180140860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:41 np0005537642 bash[96449]: f065abb4f544792918084c5ed281ef19e5ab4ab7b5656e4afc127a5ba0d2b120
Nov 27 05:58:41 np0005537642 systemd[1]: Started Ceph mds.cephfs.compute-0.pbsgjz for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:58:41 np0005537642 podman[96494]: 2025-11-27 10:58:41.567368328 +0000 UTC m=+0.515906383 container create 8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7 (image=quay.io/ceph/ceph:v19, name=festive_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 27 05:58:41 np0005537642 systemd[1]: Started libpod-conmon-8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7.scope.
Nov 27 05:58:41 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:41 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa7b6b8c092a6e9a59e67041e9965c89f8032da96afd4730ac00c9589e7104b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:41 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fa7b6b8c092a6e9a59e67041e9965c89f8032da96afd4730ac00c9589e7104b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:41 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e6 new map
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2025-11-27T10:58:41:468301+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:35.393791+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24226}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24226 members: 24226#012[mds.cephfs.compute-2.pktzxb{0:24226} state up:active seq 2 addr [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.pbsgjz{-1:14526} state up:standby seq 1 addr [v2:192.168.122.100:6806/3239645780,v1:192.168.122.100:6807/3239645780] compat {c=[1],r=[1],i=[1fff]}]
Nov 27 05:58:41 np0005537642 ceph-mds[96507]: mds.cephfs.compute-0.pbsgjz Updating MDS map to version 6 from mon.0
Nov 27 05:58:41 np0005537642 ceph-mds[96507]: mds.cephfs.compute-0.pbsgjz Monitors have assigned me to become a standby
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3239645780,v1:192.168.122.100:6807/3239645780] up:boot
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pktzxb=up:active} 1 up:standby
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.pbsgjz"} v 0)
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.pbsgjz"}]: dispatch
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e6 all = 0
Nov 27 05:58:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:58:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:58:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:58:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:58:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:58:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:58:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:58:42 np0005537642 podman[96494]: 2025-11-27 10:58:42.256979357 +0000 UTC m=+1.205517392 container init 8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7 (image=quay.io/ceph/ceph:v19, name=festive_sinoussi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 27 05:58:42 np0005537642 podman[96494]: 2025-11-27 10:58:42.269942375 +0000 UTC m=+1.218480420 container start 8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7 (image=quay.io/ceph/ceph:v19, name=festive_sinoussi, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 27 05:58:42 np0005537642 podman[96494]: 2025-11-27 10:58:42.517804257 +0000 UTC m=+1.466342352 container attach 8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7 (image=quay.io/ceph/ceph:v19, name=festive_sinoussi, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS)
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dfsdca", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dfsdca", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1947042074' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 27 05:58:42 np0005537642 festive_sinoussi[96529]: 
Nov 27 05:58:42 np0005537642 festive_sinoussi[96529]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.qnrkij/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.npcryb/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.yyrxaz/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd.1","name":"osd_mclock_max_capacity_iops_hdd","value":"319.891205","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.xkdunz","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.mkskbt","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.ujaphm","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 27 05:58:42 np0005537642 systemd[1]: libpod-8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7.scope: Deactivated successfully.
Nov 27 05:58:42 np0005537642 podman[96494]: 2025-11-27 10:58:42.666747275 +0000 UTC m=+1.615285350 container died 8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7 (image=quay.io/ceph/ceph:v19, name=festive_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dfsdca", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:42 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.dfsdca on compute-1
Nov 27 05:58:42 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.dfsdca on compute-1
Nov 27 05:58:42 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8fa7b6b8c092a6e9a59e67041e9965c89f8032da96afd4730ac00c9589e7104b-merged.mount: Deactivated successfully.
Nov 27 05:58:42 np0005537642 podman[96494]: 2025-11-27 10:58:42.751945661 +0000 UTC m=+1.700483666 container remove 8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7 (image=quay.io/ceph/ceph:v19, name=festive_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:42 np0005537642 systemd[1]: libpod-conmon-8d91dea52f2cdf78384e7b6d73218408a0cb429c912d1897ff14989fddbdfab7.scope: Deactivated successfully.
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dfsdca", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 27 05:58:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.dfsdca", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 27 05:58:43 np0005537642 python3[96592]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:43 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 27 05:58:43 np0005537642 podman[96593]: 2025-11-27 10:58:43.938716616 +0000 UTC m=+0.063846565 container create 1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c (image=quay.io/ceph/ceph:v19, name=elastic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 05:58:43 np0005537642 systemd[1]: Started libpod-conmon-1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c.scope.
Nov 27 05:58:44 np0005537642 podman[96593]: 2025-11-27 10:58:43.908064353 +0000 UTC m=+0.033194292 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: Deploying daemon mds.cephfs.compute-1.dfsdca on compute-1
Nov 27 05:58:44 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:44 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6ed4b9657335648e0f4eccc0b8e78f19a9b54f2fd1917248a2c02c81cca826/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:44 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d6ed4b9657335648e0f4eccc0b8e78f19a9b54f2fd1917248a2c02c81cca826/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:44 np0005537642 podman[96593]: 2025-11-27 10:58:44.051799711 +0000 UTC m=+0.176929720 container init 1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c (image=quay.io/ceph/ceph:v19, name=elastic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 27 05:58:44 np0005537642 podman[96593]: 2025-11-27 10:58:44.05918587 +0000 UTC m=+0.184315809 container start 1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c (image=quay.io/ceph/ceph:v19, name=elastic_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:44 np0005537642 podman[96593]: 2025-11-27 10:58:44.065801467 +0000 UTC m=+0.190931416 container attach 1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c (image=quay.io/ceph/ceph:v19, name=elastic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760322278' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 27 05:58:44 np0005537642 elastic_dirac[96608]: mimic
Nov 27 05:58:44 np0005537642 systemd[1]: libpod-1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c.scope: Deactivated successfully.
Nov 27 05:58:44 np0005537642 podman[96593]: 2025-11-27 10:58:44.477721304 +0000 UTC m=+0.602851253 container died 1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c (image=quay.io/ceph/ceph:v19, name=elastic_dirac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 27 05:58:44 np0005537642 systemd[1]: var-lib-containers-storage-overlay-4d6ed4b9657335648e0f4eccc0b8e78f19a9b54f2fd1917248a2c02c81cca826-merged.mount: Deactivated successfully.
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 2cd04496-45ac-4e0d-bf3c-74e472076093 (Updating mds.cephfs deployment (+3 -> 3))
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 2cd04496-45ac-4e0d-bf3c-74e472076093 (Updating mds.cephfs deployment (+3 -> 3)) in 12 seconds
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Nov 27 05:58:44 np0005537642 podman[96593]: 2025-11-27 10:58:44.556795466 +0000 UTC m=+0.681925375 container remove 1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c (image=quay.io/ceph/ceph:v19, name=elastic_dirac, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:58:44 np0005537642 systemd[1]: libpod-conmon-1aba88fdc767b63af36e6508cb10a0171099a5906a76736a6e94ad4d4361f45c.scope: Deactivated successfully.
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev d6adb227-259a-4256-9d71-ee64a6aa8224 (Updating nfs.cephfs deployment (+3 -> 3))
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.qwiywa
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.qwiywa
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.qwiywa-rgw
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.qwiywa-rgw
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.qwiywa's ganesha conf is defaulting to empty
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.qwiywa's ganesha conf is defaulting to empty
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.qwiywa on compute-1
Nov 27 05:58:44 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.qwiywa on compute-1
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e7 new map
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2025-11-27T10:58:45:049612+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:35.393791+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24226}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24226 members: 24226#012[mds.cephfs.compute-2.pktzxb{0:24226} state up:active seq 2 addr [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.pbsgjz{-1:14526} state up:standby seq 1 addr [v2:192.168.122.100:6806/3239645780,v1:192.168.122.100:6807/3239645780] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.dfsdca{-1:24203} state up:standby seq 1 addr [v2:192.168.122.101:6804/978550490,v1:192.168.122.101:6805/978550490] compat {c=[1],r=[1],i=[1fff]}]
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/978550490,v1:192.168.122.101:6805/978550490] up:boot
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pktzxb=up:active} 2 up:standby
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.dfsdca"} v 0)
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.dfsdca"}]: dispatch
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e7 all = 0
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: Creating key for client.nfs.cephfs.0.0.compute-1.qwiywa
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:58:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.qwiywa-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:58:45 np0005537642 python3[96706]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:58:45 np0005537642 podman[96707]: 2025-11-27 10:58:45.788960389 +0000 UTC m=+0.081855518 container create 068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e (image=quay.io/ceph/ceph:v19, name=heuristic_shockley, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 27 05:58:45 np0005537642 podman[96707]: 2025-11-27 10:58:45.746943501 +0000 UTC m=+0.039838720 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:58:45 np0005537642 systemd[1]: Started libpod-conmon-068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e.scope.
Nov 27 05:58:45 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:45 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be627dca58278e8a2870dd3278e43768cf6ef50d10151b681511a3c824f0855/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:45 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8be627dca58278e8a2870dd3278e43768cf6ef50d10151b681511a3c824f0855/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:58:45 np0005537642 podman[96707]: 2025-11-27 10:58:45.90038649 +0000 UTC m=+0.193281629 container init 068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e (image=quay.io/ceph/ceph:v19, name=heuristic_shockley, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Nov 27 05:58:45 np0005537642 podman[96707]: 2025-11-27 10:58:45.909539416 +0000 UTC m=+0.202434575 container start 068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e (image=quay.io/ceph/ceph:v19, name=heuristic_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 27 05:58:45 np0005537642 podman[96707]: 2025-11-27 10:58:45.917245423 +0000 UTC m=+0.210140582 container attach 068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e (image=quay.io/ceph/ceph:v19, name=heuristic_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:58:45 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1710112738' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 27 05:58:46 np0005537642 heuristic_shockley[96722]: 
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: Creating key for client.nfs.cephfs.0.0.compute-1.qwiywa-rgw
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: Bind address in nfs.cephfs.0.0.compute-1.qwiywa's ganesha conf is defaulting to empty
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: Deploying daemon nfs.cephfs.0.0.compute-1.qwiywa on compute-1
Nov 27 05:58:46 np0005537642 heuristic_shockley[96722]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Nov 27 05:58:46 np0005537642 systemd[1]: libpod-068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e.scope: Deactivated successfully.
Nov 27 05:58:46 np0005537642 podman[96707]: 2025-11-27 10:58:46.383812515 +0000 UTC m=+0.676707664 container died 068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e (image=quay.io/ceph/ceph:v19, name=heuristic_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e8 new map
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2025-11-27T10:58:46:187215+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:35.393791+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24226}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24226 members: 24226#012[mds.cephfs.compute-2.pktzxb{0:24226} state up:active seq 2 addr [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.pbsgjz{-1:14526} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3239645780,v1:192.168.122.100:6807/3239645780] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.dfsdca{-1:24203} state up:standby seq 1 addr [v2:192.168.122.101:6804/978550490,v1:192.168.122.101:6805/978550490] compat {c=[1],r=[1],i=[1fff]}]
Nov 27 05:58:46 np0005537642 ceph-mds[96507]: mds.cephfs.compute-0.pbsgjz Updating MDS map to version 8 from mon.0
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3239645780,v1:192.168.122.100:6807/3239645780] up:standby
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pktzxb=up:active} 2 up:standby
Nov 27 05:58:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:58:46 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8be627dca58278e8a2870dd3278e43768cf6ef50d10151b681511a3c824f0855-merged.mount: Deactivated successfully.
Nov 27 05:58:47 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 15 completed events
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:47 np0005537642 podman[96707]: 2025-11-27 10:58:47.4957141 +0000 UTC m=+1.788609249 container remove 068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e (image=quay.io/ceph/ceph:v19, name=heuristic_shockley, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:47 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ybojnw
Nov 27 05:58:47 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ybojnw
Nov 27 05:58:47 np0005537642 systemd[1]: libpod-conmon-068a8adad6ce106fe3ef6115497ea5f4978aa3673f987a7594d4efc5037cde5e.scope: Deactivated successfully.
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 27 05:58:47 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 27 05:58:47 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 new map
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2025-11-27T10:58:47:615762+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:46.622137+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24226}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24226 members: 24226#012[mds.cephfs.compute-2.pktzxb{0:24226} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.pbsgjz{-1:14526} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3239645780,v1:192.168.122.100:6807/3239645780] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.dfsdca{-1:24203} state up:standby seq 1 addr [v2:192.168.122.101:6804/978550490,v1:192.168.122.101:6805/978550490] compat {c=[1],r=[1],i=[1fff]}]
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] up:active
Nov 27 05:58:47 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pktzxb=up:active} 2 up:standby
Nov 27 05:58:47 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: Creating key for client.nfs.cephfs.1.0.compute-2.ybojnw
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e10 new map
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e10 print_map#012e10#012btime 2025-11-27T10:58:48:705975+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0119#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-11-27T10:58:12.975652+0000#012modified#0112025-11-27T10:58:46.622137+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24226}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24226 members: 24226#012[mds.cephfs.compute-2.pktzxb{0:24226} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/4184153211,v1:192.168.122.102:6805/4184153211] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.pbsgjz{-1:14526} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3239645780,v1:192.168.122.100:6807/3239645780] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.dfsdca{-1:24203} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/978550490,v1:192.168.122.101:6805/978550490] compat {c=[1],r=[1],i=[1fff]}]
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/978550490,v1:192.168.122.101:6805/978550490] up:standby
Nov 27 05:58:48 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.pktzxb=up:active} 2 up:standby
Nov 27 05:58:49 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Nov 27 05:58:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 27 05:58:50 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 27 05:58:51 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 27 05:58:51 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 27 05:58:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.ybojnw-rgw
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.ybojnw-rgw
Nov 27 05:58:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 27 05:58:51 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:58:51 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.ybojnw's ganesha conf is defaulting to empty
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.ybojnw's ganesha conf is defaulting to empty
Nov 27 05:58:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:51 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.ybojnw on compute-2
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.ybojnw on compute-2
Nov 27 05:58:51 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 767 B/s wr, 1 op/s
Nov 27 05:58:52 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 27 05:58:52 np0005537642 ceph-mon[74338]: Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:52 np0005537642 ceph-mon[74338]: Creating key for client.nfs.cephfs.1.0.compute-2.ybojnw-rgw
Nov 27 05:58:52 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:58:52 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.ybojnw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:58:52 np0005537642 ceph-mon[74338]: Bind address in nfs.cephfs.1.0.compute-2.ybojnw's ganesha conf is defaulting to empty
Nov 27 05:58:52 np0005537642 ceph-mon[74338]: Deploying daemon nfs.cephfs.1.0.compute-2.ybojnw on compute-2
Nov 27 05:58:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:58:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:58:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 27 05:58:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:53 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ymahkb
Nov 27 05:58:53 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ymahkb
Nov 27 05:58:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Nov 27 05:58:53 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 27 05:58:53 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 1 op/s
Nov 27 05:58:54 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:54 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:54 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 27 05:58:54 np0005537642 ceph-mgr[74636]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 27 05:58:54 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 27 05:58:54 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Nov 27 05:58:54 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 27 05:58:54 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 27 05:58:54 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:54 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: Creating key for client.nfs.cephfs.2.0.compute-0.ymahkb
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Nov 27 05:58:55 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 1 op/s
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Nov 27 05:58:55 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 27 05:58:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 27 05:58:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:58:56 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:56 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:56 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ymahkb-rgw
Nov 27 05:58:56 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ymahkb-rgw
Nov 27 05:58:56 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Nov 27 05:58:56 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:58:56 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Nov 27 05:58:56 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Nov 27 05:58:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:58:57 np0005537642 ceph-mgr[74636]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.ymahkb's ganesha conf is defaulting to empty
Nov 27 05:58:57 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.ymahkb's ganesha conf is defaulting to empty
Nov 27 05:58:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 05:58:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 05:58:57 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.ymahkb on compute-0
Nov 27 05:58:57 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.ymahkb on compute-0
Nov 27 05:58:57 np0005537642 podman[96924]: 2025-11-27 10:58:57.673055579 +0000 UTC m=+0.029704438 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:58:57 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 767 B/s wr, 1 op/s
Nov 27 05:58:58 np0005537642 ceph-mon[74338]: Rados config object exists: conf-nfs.cephfs
Nov 27 05:58:58 np0005537642 ceph-mon[74338]: Creating key for client.nfs.cephfs.2.0.compute-0.ymahkb-rgw
Nov 27 05:58:58 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 27 05:58:58 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ymahkb-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 27 05:58:58 np0005537642 ceph-mon[74338]: Bind address in nfs.cephfs.2.0.compute-0.ymahkb's ganesha conf is defaulting to empty
Nov 27 05:58:58 np0005537642 ceph-mon[74338]: Deploying daemon nfs.cephfs.2.0.compute-0.ymahkb on compute-0
Nov 27 05:58:58 np0005537642 podman[96924]: 2025-11-27 10:58:58.023387793 +0000 UTC m=+0.380036582 container create 267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 27 05:58:58 np0005537642 systemd[1]: Started libpod-conmon-267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2.scope.
Nov 27 05:58:58 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:58:58 np0005537642 podman[96924]: 2025-11-27 10:58:58.400881656 +0000 UTC m=+0.757530475 container init 267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Nov 27 05:58:58 np0005537642 podman[96924]: 2025-11-27 10:58:58.413320539 +0000 UTC m=+0.769969348 container start 267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_noyce, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:58 np0005537642 thirsty_noyce[96941]: 167 167
Nov 27 05:58:58 np0005537642 systemd[1]: libpod-267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2.scope: Deactivated successfully.
Nov 27 05:58:58 np0005537642 podman[96924]: 2025-11-27 10:58:58.492612917 +0000 UTC m=+0.849261796 container attach 267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 05:58:58 np0005537642 podman[96924]: 2025-11-27 10:58:58.493491181 +0000 UTC m=+0.850139970 container died 267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:58:59 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8d0ecec881f7bf08af1c5c1efac663a65bbabefbbe69b1e5e39727a0d8f331d1-merged.mount: Deactivated successfully.
Nov 27 05:58:59 np0005537642 podman[96924]: 2025-11-27 10:58:59.584635508 +0000 UTC m=+1.941284287 container remove 267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_noyce, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 27 05:58:59 np0005537642 systemd[1]: libpod-conmon-267f06076c7e9edb1c6919b143f0fb468d9d38f07367ecd78c8d75c26d4a6ce2.scope: Deactivated successfully.
Nov 27 05:58:59 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Nov 27 05:59:00 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:00 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:00 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:00 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:00 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:00 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:00 np0005537642 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ymahkb for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:59:01 np0005537642 podman[97086]: 2025-11-27 10:59:01.264151848 +0000 UTC m=+0.042750119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 05:59:01 np0005537642 podman[97086]: 2025-11-27 10:59:01.383177463 +0000 UTC m=+0.161775654 container create 53f9f5e8dda735f05eb81d1b684c0d159d53601b93e16ccc276ab30167724430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 05:59:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2035aaadbced0b965f5a6964320084d64805f69f5c1ba2bb45496bf564d620/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2035aaadbced0b965f5a6964320084d64805f69f5c1ba2bb45496bf564d620/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2035aaadbced0b965f5a6964320084d64805f69f5c1ba2bb45496bf564d620/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:01 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec2035aaadbced0b965f5a6964320084d64805f69f5c1ba2bb45496bf564d620/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ymahkb-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:01 np0005537642 podman[97086]: 2025-11-27 10:59:01.625186259 +0000 UTC m=+0.403784530 container init 53f9f5e8dda735f05eb81d1b684c0d159d53601b93e16ccc276ab30167724430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:59:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:01 np0005537642 podman[97086]: 2025-11-27 10:59:01.634367335 +0000 UTC m=+0.412965556 container start 53f9f5e8dda735f05eb81d1b684c0d159d53601b93e16ccc276ab30167724430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 27 05:59:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Nov 27 05:59:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Nov 27 05:59:01 np0005537642 bash[97086]: 53f9f5e8dda735f05eb81d1b684c0d159d53601b93e16ccc276ab30167724430
Nov 27 05:59:01 np0005537642 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ymahkb for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:59:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Nov 27 05:59:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Nov 27 05:59:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Nov 27 05:59:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Nov 27 05:59:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Nov 27 05:59:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 27 05:59:01 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 852 B/s wr, 3 op/s
Nov 27 05:59:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:59:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 27 05:59:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Nov 27 05:59:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:02 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev d6adb227-259a-4256-9d71-ee64a6aa8224 (Updating nfs.cephfs deployment (+3 -> 3))
Nov 27 05:59:02 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event d6adb227-259a-4256-9d71-ee64a6aa8224 (Updating nfs.cephfs deployment (+3 -> 3)) in 18 seconds
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 27 05:59:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 27 05:59:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:02 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 9162d3b0-cce2-464c-9212-8d63e174e08f (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 27 05:59:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 27 05:59:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 27 05:59:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Nov 27 05:59:03 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.btqgqz on compute-1
Nov 27 05:59:03 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.btqgqz on compute-1
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Nov 27 05:59:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Nov 27 05:59:03 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:03 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:03 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:03 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:03 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:03 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Nov 27 05:59:04 np0005537642 ceph-mon[74338]: Deploying daemon haproxy.nfs.cephfs.compute-1.btqgqz on compute-1
Nov 27 05:59:05 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Nov 27 05:59:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:59:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:59:07 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 16 completed events
Nov 27 05:59:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:59:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 27 05:59:07 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.1 KiB/s rd, 1.7 KiB/s wr, 7 op/s
Nov 27 05:59:07 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:08 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:08 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.vcfcow on compute-0
Nov 27 05:59:08 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.vcfcow on compute-0
Nov 27 05:59:08 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:08 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37e0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:09 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:09 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:09 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:09 np0005537642 ceph-mon[74338]: Deploying daemon haproxy.nfs.cephfs.compute-0.vcfcow on compute-0
Nov 27 05:59:09 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Nov 27 05:59:10 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:10 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40014d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:11 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:11 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 27 05:59:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_10:59:11
Nov 27 05:59:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 05:59:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] do_upmap
Nov 27 05:59:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', '.nfs', 'volumes', 'vms', 'backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log']
Nov 27 05:59:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] prepared 0/10 upmap changes
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Nov 27 05:59:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:59:12 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 05:59:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 05:59:12 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:12 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 27 05:59:13 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 27 05:59:13 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v38: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1023 B/s wr, 4 op/s
Nov 27 05:59:13 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 27 05:59:13 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 21e957c0-1cfd-4522-8077-7a2d6582104a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 27 05:59:13 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:59:13 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:14 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:14 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:14 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:14 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 27 05:59:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 27 05:59:15 np0005537642 podman[97247]: 2025-11-27 10:59:15.014370079 +0000 UTC m=+6.084216528 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 27 05:59:15 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev ead5154e-49c2-4353-8255-c4ad50225ba1 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:15 np0005537642 podman[97247]: 2025-11-27 10:59:15.315082421 +0000 UTC m=+6.384928820 container create 9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384 (image=quay.io/ceph/haproxy:2.3, name=eager_benz)
Nov 27 05:59:15 np0005537642 systemd[1]: Started libpod-conmon-9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384.scope.
Nov 27 05:59:15 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:15 np0005537642 podman[97247]: 2025-11-27 10:59:15.599975047 +0000 UTC m=+6.669821486 container init 9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384 (image=quay.io/ceph/haproxy:2.3, name=eager_benz)
Nov 27 05:59:15 np0005537642 podman[97247]: 2025-11-27 10:59:15.6134771 +0000 UTC m=+6.683323499 container start 9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384 (image=quay.io/ceph/haproxy:2.3, name=eager_benz)
Nov 27 05:59:15 np0005537642 eager_benz[97369]: 0 0
Nov 27 05:59:15 np0005537642 systemd[1]: libpod-9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384.scope: Deactivated successfully.
Nov 27 05:59:15 np0005537642 podman[97247]: 2025-11-27 10:59:15.709035635 +0000 UTC m=+6.778882034 container attach 9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384 (image=quay.io/ceph/haproxy:2.3, name=eager_benz)
Nov 27 05:59:15 np0005537642 podman[97247]: 2025-11-27 10:59:15.709525798 +0000 UTC m=+6.779372187 container died 9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384 (image=quay.io/ceph/haproxy:2.3, name=eager_benz)
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:15 np0005537642 systemd[1]: var-lib-containers-storage-overlay-94e54eb9cfbc02123a974a8d9f69307524d6aa2272e4db9d72c8dd0d67cb6e5b-merged.mount: Deactivated successfully.
Nov 27 05:59:15 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v41: 198 pgs: 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 127 B/s wr, 0 op/s
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:15 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 27 05:59:16 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 622b389d-fb8b-4c5b-984c-5d1330913ad9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:16 np0005537642 podman[97247]: 2025-11-27 10:59:16.278642594 +0000 UTC m=+7.348488963 container remove 9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384 (image=quay.io/ceph/haproxy:2.3, name=eager_benz)
Nov 27 05:59:16 np0005537642 systemd[1]: libpod-conmon-9edcd42756a96e8f1f4b0b763a6b0ef3cc1d3aba6408451cd6f2fb24225ca384.scope: Deactivated successfully.
Nov 27 05:59:16 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:16 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:16 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:16 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:16 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:17 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:17 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:17 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 27 05:59:17 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev a7b8c5a2-99a5-41df-9c2d-3243f59f80b9 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:17 np0005537642 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.vcfcow for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:59:17 np0005537642 podman[97516]: 2025-11-27 10:59:17.698204087 +0000 UTC m=+0.121503583 container create bdcf3b8372dc80dc778da56341fccbda9cc403cf6c6dbfd21e1cbbfaf9135c0c (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow)
Nov 27 05:59:17 np0005537642 podman[97516]: 2025-11-27 10:59:17.613236796 +0000 UTC m=+0.036536342 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 27 05:59:17 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v44: 260 pgs: 62 unknown, 198 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:59:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,62 pgs not in active + clean state
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:18 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3ad34c32c97dd6dd2df761030b5e3c847a04cf27a7f1ee435690d12440ccf0/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 27 05:59:18 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 69 pg[10.0( v 54'48 (0'0,54'48] local-lis/les=52/53 n=8 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=69 pruub=15.069886208s) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 54'47 mlcod 54'47 active pruub 230.261795044s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 27 05:59:18 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 69 pg[10.0( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=69 pruub=15.069886208s) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 54'47 mlcod 0'0 unknown pruub 230.261795044s@ mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev c1667773-5d95-4f8f-b21b-81e2c7a9e4bf (PG autoscaler increasing pool 12 PGs from 1 to 32)
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 21e957c0-1cfd-4522-8077-7a2d6582104a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 21e957c0-1cfd-4522-8077-7a2d6582104a (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev ead5154e-49c2-4353-8255-c4ad50225ba1 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event ead5154e-49c2-4353-8255-c4ad50225ba1 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 622b389d-fb8b-4c5b-984c-5d1330913ad9 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 622b389d-fb8b-4c5b-984c-5d1330913ad9 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev a7b8c5a2-99a5-41df-9c2d-3243f59f80b9 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event a7b8c5a2-99a5-41df-9c2d-3243f59f80b9 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev c1667773-5d95-4f8f-b21b-81e2c7a9e4bf (PG autoscaler increasing pool 12 PGs from 1 to 32)
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event c1667773-5d95-4f8f-b21b-81e2c7a9e4bf (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Nov 27 05:59:18 np0005537642 podman[97516]: 2025-11-27 10:59:18.47962234 +0000 UTC m=+0.902921836 container init bdcf3b8372dc80dc778da56341fccbda9cc403cf6c6dbfd21e1cbbfaf9135c0c (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow)
Nov 27 05:59:18 np0005537642 podman[97516]: 2025-11-27 10:59:18.484328836 +0000 UTC m=+0.907628302 container start bdcf3b8372dc80dc778da56341fccbda9cc403cf6c6dbfd21e1cbbfaf9135c0c (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow)
Nov 27 05:59:18 np0005537642 bash[97516]: bdcf3b8372dc80dc778da56341fccbda9cc403cf6c6dbfd21e1cbbfaf9135c0c
Nov 27 05:59:18 np0005537642 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.vcfcow for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:59:18 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow[97533]: [NOTICE] 330/105918 (2) : New worker #1 (4) forked
Nov 27 05:59:18 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow[97533]: [WARNING] 330/105918 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 27 05:59:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.uabhnb on compute-2
Nov 27 05:59:18 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.uabhnb on compute-2
Nov 27 05:59:18 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:18 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40021d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: Deploying daemon haproxy.nfs.cephfs.compute-2.uabhnb on compute-2
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1b( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.7( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.11( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.10( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.12( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1f( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1e( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1d( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1a( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.19( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1c( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.18( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.6( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.4( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.5( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.3( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.b( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.8( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.a( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.9( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.d( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.c( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.f( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.e( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1( v 54'48 (0'0,54'48] local-lis/les=52/53 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.2( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.13( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.15( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.14( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.16( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.17( v 54'48 lc 0'0 (0'0,54'48] local-lis/les=52/53 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.7( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1f( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.10( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.11( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.12( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1b( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1e( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1d( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1a( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.19( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1c( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.6( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.4( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.5( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.18( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.b( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.3( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.a( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.9( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.c( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.e( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.f( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.d( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.1( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.2( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.8( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.13( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.15( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.14( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.17( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.0( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=52/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 54'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 70 pg[10.16( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=52/52 les/c/f=53/53/0 sis=69) [1] r=0 lpr=69 pi=[52,69)/1 crt=54'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 27 05:59:19 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 27 05:59:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:19 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:19 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v47: 322 pgs: 62 unknown, 260 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Nov 27 05:59:19 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 27 05:59:20 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 27 05:59:20 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 27 05:59:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 71 pg[12.0( v 69'48 (0'0,69'48] local-lis/les=62/63 n=5 ec=62/62 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=8.619353294s) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 69'47 mlcod 69'47 active pruub 225.882461548s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:20 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 71 pg[12.0( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=62/62 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=8.619353294s) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 69'47 mlcod 0'0 unknown pruub 225.882461548s@ mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:20 np0005537642 ceph-osd[82775]: bluestore(/var/lib/ceph/osd/ceph-1).collection(12.0_head 0x562f5a4038c0) operator()   moving buffer(0x562f58f643e8 space 0x562f58edd870 0x0~1000 clean)
Nov 27 05:59:20 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:20 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Nov 27 05:59:20 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Nov 27 05:59:20 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:20 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.11 deep-scrub starts
Nov 27 05:59:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.11 deep-scrub ok
Nov 27 05:59:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 27 05:59:21 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.11( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.10( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.13( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.12( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.15( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.7( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.6( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.4( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.9( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.8( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.a( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.c( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.f( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.b( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.e( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.5( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.d( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.2( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.3( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1e( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1f( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1c( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1a( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1b( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.18( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.19( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.16( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.17( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.14( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1d( v 69'48 lc 0'0 (0'0,69'48] local-lis/les=62/63 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1( v 69'48 (0'0,69'48] local-lis/les=62/63 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.11( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.10( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.12( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.7( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.13( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.15( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.9( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.6( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.4( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.8( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.a( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.c( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.f( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.b( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.e( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.5( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.2( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.0( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=62/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 69'47 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.3( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.d( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1e( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1f( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1a( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1b( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.16( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.19( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1c( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.17( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.18( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1d( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.14( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 72 pg[12.1( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=62/62 les/c/f=63/63/0 sis=71) [1] r=0 lpr=71 pi=[62,71)/1 crt=69'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:21 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e72 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:21 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:21 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 93 unknown, 260 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:22 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:22 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:22 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:22 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:22 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:22 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:22 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.zwobpl on compute-0
Nov 27 05:59:22 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.zwobpl on compute-0
Nov 27 05:59:22 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 27 05:59:22 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:22 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:22 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:22 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40021d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:23 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 21 completed events
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:23 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 27 05:59:23 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: Deploying daemon keepalived.nfs.cephfs.compute-0.zwobpl on compute-0
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:23 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:23 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:23 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v51: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Nov 27 05:59:23 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 27 05:59:24 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:24 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 27 05:59:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.11( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.912584305s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.284393311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.11( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.912545204s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.284393311s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.10( v 72'51 (0'0,72'51] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.918250084s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=72'49 lcod 72'50 mlcod 72'50 active pruub 234.290252686s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.15( v 72'51 (0'0,72'51] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.887450218s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=70'49 lcod 72'50 mlcod 72'50 active pruub 232.259521484s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.13( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.918197632s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290267944s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.13( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.918185234s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290267944s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.15( v 72'51 (0'0,72'51] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.887408257s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=70'49 lcod 72'50 mlcod 0'0 unknown NOTIFY pruub 232.259521484s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.10( v 72'51 (0'0,72'51] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.918107033s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=72'49 lcod 72'50 mlcod 0'0 unknown NOTIFY pruub 234.290252686s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.14( v 72'51 (0'0,72'51] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.887272835s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=70'49 lcod 72'50 mlcod 72'50 active pruub 232.259536743s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.12( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917965889s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290267944s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.14( v 72'51 (0'0,72'51] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.887235641s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=70'49 lcod 72'50 mlcod 0'0 unknown NOTIFY pruub 232.259536743s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.12( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917954445s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290267944s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.2( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886933327s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.259460449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.2( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886900902s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.259460449s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.13( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886899948s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.259506226s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.4( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917963982s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290588379s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.13( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886872292s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.259506226s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.4( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917928696s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290588379s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.1( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886655807s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.259460449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.7( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917452812s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290267944s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.1( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886642456s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.259460449s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.7( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917435646s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290267944s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.6( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917586327s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290573120s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.6( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917541504s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290573120s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.f( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886322021s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.259368896s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.f( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886308670s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.259368896s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.8( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917435646s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290649414s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.8( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917407036s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290649414s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.a( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917453766s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290710449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.c( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917612076s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290924072s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.a( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917424202s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290710449s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.c( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917593002s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290924072s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.b( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917551994s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290954590s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.8( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886046410s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.259475708s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.b( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917534828s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290954590s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.8( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.886036873s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.259475708s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.e( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917456627s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290969849s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.e( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917440414s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290969849s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.2( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917401314s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.291046143s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.2( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917386055s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.291046143s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.3( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917266846s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.291046143s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.3( v 69'48 (0'0,69'48] local-lis/les=71/72 n=1 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917252541s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.291046143s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.4( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.885284424s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.259124756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.3( v 72'51 (0'0,72'51] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.885358810s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=70'49 lcod 72'50 mlcod 72'50 active pruub 232.259201050s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.5( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.885281563s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.259124756s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.4( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.885272026s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.259124756s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.3( v 72'51 (0'0,72'51] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.885301590s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=70'49 lcod 72'50 mlcod 0'0 unknown NOTIFY pruub 232.259201050s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.5( v 54'48 (0'0,54'48] local-lis/les=69/70 n=1 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.885191917s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.259124756s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.1e( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917170525s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.291137695s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.1e( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917160034s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.291137695s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.18( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.885099411s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.259140015s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.18( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.885081291s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.259140015s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.19( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.884445190s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.258697510s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.19( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.884427071s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.258697510s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.1c( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.916934967s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.291244507s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.1c( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.916917801s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.291244507s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.1a( v 72'51 (0'0,72'51] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.916749001s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=72'49 lcod 72'50 mlcod 72'50 active pruub 234.291152954s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.1e( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.884109497s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.258590698s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.1e( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.884095192s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.258590698s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.18( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917386055s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.291885376s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.1a( v 72'51 (0'0,72'51] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.916628838s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=72'49 lcod 72'50 mlcod 0'0 unknown NOTIFY pruub 234.291152954s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.19( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.916651726s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.291229248s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.18( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917367935s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.291885376s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.10( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.883831978s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.258453369s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.19( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.916635513s) [0] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.291229248s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.10( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.883821487s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.258453369s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.11( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.883650780s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.258468628s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.17( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.916437149s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.291275024s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.12( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.883680344s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.258514404s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.11( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.883629799s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.258468628s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.17( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.916420937s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.291275024s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.1d( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917022705s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.291900635s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.12( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.883654594s) [2] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.258514404s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.1d( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.917006493s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.291900635s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.1b( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.883598328s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 active pruub 232.258575439s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[10.1b( v 54'48 (0'0,54'48] local-lis/les=69/70 n=0 ec=69/52 lis/c=69/69 les/c/f=70/70/0 sis=73 pruub=10.883584976s) [0] r=-1 lpr=73 pi=[69,73)/1 crt=54'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.258575439s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.9( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.914780617s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 active pruub 234.290344238s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[12.9( v 69'48 (0'0,69'48] local-lis/les=71/72 n=0 ec=71/62 lis/c=71/71 les/c/f=72/72/0 sis=73 pruub=12.914670944s) [2] r=-1 lpr=73 pi=[71,73)/1 crt=69'48 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 234.290344238s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.14( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.14( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.17( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.12( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.1( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.8( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.10( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.f( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.4( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.7( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.4( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.1b( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.1a( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.5( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.19( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.18( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.1c( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.1b( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.1d( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[11.1e( empty local-lis/les=0/0 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 73 pg[8.12( empty local-lis/les=0/0 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.10( v 49'6 (0'0,49'6] local-lis/les=73/74 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.1a( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.1c( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.12( v 49'6 (0'0,49'6] local-lis/les=73/74 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.19( v 49'6 lc 0'0 (0'0,49'6] local-lis/les=73/74 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.1e( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.1b( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.1b( v 49'6 (0'0,49'6] local-lis/les=73/74 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.7( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.5( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.4( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.f( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.1( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.8( v 49'6 (0'0,49'6] local-lis/les=73/74 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.12( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.14( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.17( v 49'6 (0'0,49'6] local-lis/les=73/74 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.14( v 49'6 (0'0,49'6] local-lis/les=73/74 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[11.1d( empty local-lis/les=73/74 n=0 ec=69/55 lis/c=69/69 les/c/f=70/70/0 sis=73) [1] r=0 lpr=73 pi=[69,73)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.18( v 49'6 (0'0,49'6] local-lis/les=73/74 n=0 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 74 pg[8.4( v 49'6 (0'0,49'6] local-lis/les=73/74 n=1 ec=67/48 lis/c=67/67 les/c/f=68/68/0 sis=73) [1] r=0 lpr=73 pi=[67,73)/1 crt=49'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:25 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40021d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:25 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc0016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:25 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v54: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Nov 27 05:59:25 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 27 05:59:26 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.16 deep-scrub starts
Nov 27 05:59:26 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.16 deep-scrub ok
Nov 27 05:59:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:26 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 27 05:59:26 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 27 05:59:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 27 05:59:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 27 05:59:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 27 05:59:26 np0005537642 podman[97640]: 2025-11-27 10:59:26.705384357 +0000 UTC m=+3.604232252 container create 87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4 (image=quay.io/ceph/keepalived:2.2.4, name=magical_wing, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, vcs-type=git, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, release=1793, description=keepalived for Ceph)
Nov 27 05:59:26 np0005537642 systemd[1]: Started libpod-conmon-87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4.scope.
Nov 27 05:59:26 np0005537642 podman[97640]: 2025-11-27 10:59:26.682229436 +0000 UTC m=+3.581077361 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 27 05:59:26 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:26 np0005537642 podman[97640]: 2025-11-27 10:59:26.82468232 +0000 UTC m=+3.723530225 container init 87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4 (image=quay.io/ceph/keepalived:2.2.4, name=magical_wing, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, release=1793, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, io.openshift.expose-services=, description=keepalived for Ceph, vcs-type=git)
Nov 27 05:59:26 np0005537642 podman[97640]: 2025-11-27 10:59:26.83400271 +0000 UTC m=+3.732850625 container start 87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4 (image=quay.io/ceph/keepalived:2.2.4, name=magical_wing, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, release=1793, vcs-type=git, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.buildah.version=1.28.2, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 27 05:59:26 np0005537642 podman[97640]: 2025-11-27 10:59:26.837885034 +0000 UTC m=+3.736732919 container attach 87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4 (image=quay.io/ceph/keepalived:2.2.4, name=magical_wing, version=2.2.4, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, name=keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.buildah.version=1.28.2, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Nov 27 05:59:26 np0005537642 magical_wing[97735]: 0 0
Nov 27 05:59:26 np0005537642 systemd[1]: libpod-87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4.scope: Deactivated successfully.
Nov 27 05:59:26 np0005537642 podman[97640]: 2025-11-27 10:59:26.843951587 +0000 UTC m=+3.742799472 container died 87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4 (image=quay.io/ceph/keepalived:2.2.4, name=magical_wing, io.openshift.tags=Ceph keepalived, distribution-scope=public, architecture=x86_64, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, vendor=Red Hat, Inc., name=keepalived, description=keepalived for Ceph)
Nov 27 05:59:26 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8d0ab4483d22e4f15274a87e31ac7e079059eb951ad3ecfd818d3be5b012d795-merged.mount: Deactivated successfully.
Nov 27 05:59:26 np0005537642 podman[97640]: 2025-11-27 10:59:26.886282353 +0000 UTC m=+3.785130238 container remove 87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4 (image=quay.io/ceph/keepalived:2.2.4, name=magical_wing, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, architecture=x86_64, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container)
Nov 27 05:59:26 np0005537642 systemd[1]: libpod-conmon-87fe6e02a643d626ad6f9bb117b1423513e5e1bc33a146d67ab509093b3808b4.scope: Deactivated successfully.
Nov 27 05:59:26 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:27 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:27 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:27 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:27 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:27 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:27 np0005537642 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.zwobpl for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:59:27 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.15 scrub starts
Nov 27 05:59:27 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.15 scrub ok
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:27 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:27 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 27 05:59:27 np0005537642 podman[97880]: 2025-11-27 10:59:27.788344015 +0000 UTC m=+0.037931509 container create 3d53a6719a5e282f93d17858adf01e17fba9bc28fc08c14a3a13a6d5e41c4691 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived)
Nov 27 05:59:27 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1771bed783e4ec7992c4a03d0b61f9b4fe412af7af1be4647afa6bce223aee37/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:27 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:27 np0005537642 podman[97880]: 2025-11-27 10:59:27.846939958 +0000 UTC m=+0.096527472 container init 3d53a6719a5e282f93d17858adf01e17fba9bc28fc08c14a3a13a6d5e41c4691 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, release=1793, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container)
Nov 27 05:59:27 np0005537642 podman[97880]: 2025-11-27 10:59:27.854400438 +0000 UTC m=+0.103987932 container start 3d53a6719a5e282f93d17858adf01e17fba9bc28fc08c14a3a13a6d5e41c4691 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl, architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, name=keepalived)
Nov 27 05:59:27 np0005537642 bash[97880]: 3d53a6719a5e282f93d17858adf01e17fba9bc28fc08c14a3a13a6d5e41c4691
Nov 27 05:59:27 np0005537642 podman[97880]: 2025-11-27 10:59:27.773084856 +0000 UTC m=+0.022672370 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 27 05:59:27 np0005537642 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.zwobpl for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: Starting VRRP child process, pid=4
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: Startup complete
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: (VI_0) Entering BACKUP STATE (init)
Nov 27 05:59:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:27 2025: VRRP_Script(check_backend) succeeded
Nov 27 05:59:27 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:59:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Nov 27 05:59:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 27 05:59:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:59:27 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.exczig on compute-1
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.exczig on compute-1
Nov 27 05:59:28 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event eac55ca3-ae1b-4be8-9043-5564d1047d0e (Global Recovery Event) in 10 seconds
Nov 27 05:59:28 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.0 deep-scrub starts
Nov 27 05:59:28 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.0 deep-scrub ok
Nov 27 05:59:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:28 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: Deploying daemon keepalived.nfs.cephfs.compute-1.exczig on compute-1
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 27 05:59:28 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 27 05:59:29 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 27 05:59:29 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 27 05:59:29 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:29 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 27 05:59:29 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 27 05:59:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 27 05:59:29 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 27 05:59:29 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:29 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4002f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:29 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 380 B/s, 0 keys/s, 3 objects/s recovering
Nov 27 05:59:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Nov 27 05:59:29 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 27 05:59:30 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 27 05:59:30 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 27 05:59:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:30 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40021d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 27 05:59:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 27 05:59:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 27 05:59:30 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 27 05:59:30 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 27 05:59:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:31 2025: (VI_0) Entering MASTER STATE
Nov 27 05:59:31 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 27 05:59:31 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 27 05:59:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 27 05:59:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 27 05:59:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:31 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc002b10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:31 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 27 05:59:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:31 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:31 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 27 05:59:31 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v62: 353 pgs: 353 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 407 B/s, 0 keys/s, 3 objects/s recovering
Nov 27 05:59:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Nov 27 05:59:31 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:32 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.iopdfx on compute-2
Nov 27 05:59:32 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.iopdfx on compute-2
Nov 27 05:59:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:32 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 27 05:59:32 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 27 05:59:32 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 27 05:59:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:32 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: Deploying daemon keepalived.nfs.cephfs.compute-2.iopdfx on compute-2
Nov 27 05:59:32 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 27 05:59:33 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 22 completed events
Nov 27 05:59:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:59:33 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:33 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.f scrub starts
Nov 27 05:59:33 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.f scrub ok
Nov 27 05:59:33 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:33 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 27 05:59:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 27 05:59:33 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 27 05:59:33 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:33 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc002b10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:33 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v65: 353 pgs: 4 unknown, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 9 objects/s recovering
Nov 27 05:59:34 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:34 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.d deep-scrub starts
Nov 27 05:59:34 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.d deep-scrub ok
Nov 27 05:59:34 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:34 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4002f50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 27 05:59:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 27 05:59:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 27 05:59:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:35 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 27 05:59:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:35 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 27 05:59:35 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.d scrub starts
Nov 27 05:59:35 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.d scrub ok
Nov 27 05:59:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:35 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Nov 27 05:59:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:35 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 27 05:59:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:35 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 27 05:59:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 27 05:59:35 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 4 unknown, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 302 B/s, 9 objects/s recovering
Nov 27 05:59:35 np0005537642 python3[97931]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:59:36 np0005537642 podman[97933]: 2025-11-27 10:59:36.102704862 +0000 UTC m=+0.075987641 container create c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1 (image=quay.io/ceph/ceph:v19, name=interesting_williamson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 27 05:59:36 np0005537642 systemd[1]: Started libpod-conmon-c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1.scope.
Nov 27 05:59:36 np0005537642 podman[97933]: 2025-11-27 10:59:36.07246365 +0000 UTC m=+0.045746469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:59:36 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:36 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17afcc21829933be25d80abc13bac33131be5b448c3d7b732213015d5d23abbd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:36 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17afcc21829933be25d80abc13bac33131be5b448c3d7b732213015d5d23abbd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:36 np0005537642 podman[97933]: 2025-11-27 10:59:36.210499634 +0000 UTC m=+0.183782453 container init c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1 (image=quay.io/ceph/ceph:v19, name=interesting_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 05:59:36 np0005537642 podman[97933]: 2025-11-27 10:59:36.224280654 +0000 UTC m=+0.197563433 container start c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1 (image=quay.io/ceph/ceph:v19, name=interesting_williamson, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Nov 27 05:59:36 np0005537642 podman[97933]: 2025-11-27 10:59:36.230009278 +0000 UTC m=+0.203292107 container attach c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1 (image=quay.io/ceph/ceph:v19, name=interesting_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Nov 27 05:59:36 np0005537642 interesting_williamson[97949]: could not fetch user info: no user info saved
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:36 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 9162d3b0-cce2-464c-9212-8d63e174e08f (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Nov 27 05:59:36 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 9162d3b0-cce2-464c-9212-8d63e174e08f (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 34 seconds
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Nov 27 05:59:36 np0005537642 systemd[1]: libpod-c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1.scope: Deactivated successfully.
Nov 27 05:59:36 np0005537642 podman[97933]: 2025-11-27 10:59:36.501855195 +0000 UTC m=+0.475137974 container died c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1 (image=quay.io/ceph/ceph:v19, name=interesting_williamson, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.504728) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241176504957, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7842, "num_deletes": 252, "total_data_size": 14066198, "memory_usage": 14699488, "flush_reason": "Manual Compaction"}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:36 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev c475bc5a-cfe5-448c-9c4b-c32e0f0a7144 (Updating alertmanager deployment (+1 -> 1))
Nov 27 05:59:36 np0005537642 systemd[1]: var-lib-containers-storage-overlay-17afcc21829933be25d80abc13bac33131be5b448c3d7b732213015d5d23abbd-merged.mount: Deactivated successfully.
Nov 27 05:59:36 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Nov 27 05:59:36 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Nov 27 05:59:36 np0005537642 podman[97933]: 2025-11-27 10:59:36.571398281 +0000 UTC m=+0.544681060 container remove c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1 (image=quay.io/ceph/ceph:v19, name=interesting_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 27 05:59:36 np0005537642 systemd[1]: libpod-conmon-c2a81adf0d49e868211cd42c99e70a4dddad20b8c50ef170c7c5aedb5267a6b1.scope: Deactivated successfully.
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241176590544, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 11987579, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7979, "table_properties": {"data_size": 11958993, "index_size": 18231, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 89429, "raw_average_key_size": 24, "raw_value_size": 11888282, "raw_average_value_size": 3213, "num_data_blocks": 808, "num_entries": 3700, "num_filter_entries": 3700, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764240804, "oldest_key_time": 1764240804, "file_creation_time": 1764241176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65f89df2-0592-497f-b5d7-5930e7c7d9aa", "db_session_id": "PS7NKDG3F09YEGXCLO27", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 85880 microseconds, and 44887 cpu microseconds.
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.590644) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 11987579 bytes OK
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.590672) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.592719) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.592748) EVENT_LOG_v1 {"time_micros": 1764241176592740, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.592803) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14030588, prev total WAL file size 14032774, number of live WAL files 2.
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.597826) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(57KB) 8(1944B)]
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241176598018, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12048013, "oldest_snapshot_seqno": -1}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:36 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:36 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.b scrub starts
Nov 27 05:59:36 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.b scrub ok
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3511 keys, 12001633 bytes, temperature: kUnknown
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241176714334, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12001633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11973433, "index_size": 18298, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8837, "raw_key_size": 87396, "raw_average_key_size": 24, "raw_value_size": 11904419, "raw_average_value_size": 3390, "num_data_blocks": 813, "num_entries": 3511, "num_filter_entries": 3511, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764240802, "oldest_key_time": 0, "file_creation_time": 1764241176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65f89df2-0592-497f-b5d7-5930e7c7d9aa", "db_session_id": "PS7NKDG3F09YEGXCLO27", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.714800) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12001633 bytes
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.720479) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 103.3 rd, 102.9 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.5, 0.0 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3809, records dropped: 298 output_compression: NoCompression
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.720512) EVENT_LOG_v1 {"time_micros": 1764241176720491, "job": 4, "event": "compaction_finished", "compaction_time_micros": 116599, "compaction_time_cpu_micros": 44502, "output_level": 6, "num_output_files": 1, "total_output_size": 12001633, "num_input_records": 3809, "num_output_records": 3511, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241176723423, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241176723682, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241176723835, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-10:59:36.597648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 27 05:59:36 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 27 05:59:36 np0005537642 python3[98123]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 4c838139-e0c9-556a-a9ca-e4422f459af7 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 27 05:59:37 np0005537642 podman[98139]: 2025-11-27 10:59:37.01356951 +0000 UTC m=+0.061606455 container create facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b (image=quay.io/ceph/ceph:v19, name=funny_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:59:37 np0005537642 systemd[1]: Started libpod-conmon-facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b.scope.
Nov 27 05:59:37 np0005537642 podman[98139]: 2025-11-27 10:59:36.987840929 +0000 UTC m=+0.035877904 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Nov 27 05:59:37 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:37 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01e2e386ec00a15947be4860880e9d34dcf965c53ea299f323053b52ca26957/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:37 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d01e2e386ec00a15947be4860880e9d34dcf965c53ea299f323053b52ca26957/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:37 np0005537642 podman[98139]: 2025-11-27 10:59:37.111651022 +0000 UTC m=+0.159687997 container init facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b (image=quay.io/ceph/ceph:v19, name=funny_poincare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Nov 27 05:59:37 np0005537642 podman[98139]: 2025-11-27 10:59:37.117424117 +0000 UTC m=+0.165461042 container start facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b (image=quay.io/ceph/ceph:v19, name=funny_poincare, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 05:59:37 np0005537642 podman[98139]: 2025-11-27 10:59:37.122123053 +0000 UTC m=+0.170160028 container attach facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b (image=quay.io/ceph/ceph:v19, name=funny_poincare, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 05:59:37 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:37 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:37 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:37 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:37 np0005537642 ceph-mon[74338]: Deploying daemon alertmanager.compute-0 on compute-0
Nov 27 05:59:37 np0005537642 funny_poincare[98176]: {
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "user_id": "openstack",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "display_name": "openstack",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "email": "",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "suspended": 0,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "max_buckets": 1000,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "subusers": [],
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "keys": [
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        {
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:            "user": "openstack",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:            "access_key": "A4FXT645UJJM9BDPR1TX",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:            "secret_key": "PlPErTpWsRNJl6APYbxff5xJCJhWvEAWw0P8BsT7",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:            "active": true,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:            "create_date": "2025-11-27T10:59:37.560624Z"
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        }
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    ],
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "swift_keys": [],
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "caps": [],
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "op_mask": "read, write, delete",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "default_placement": "",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "default_storage_class": "",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "placement_tags": [],
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "bucket_quota": {
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "enabled": false,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "check_on_raw": false,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "max_size": -1,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "max_size_kb": 0,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "max_objects": -1
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    },
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "user_quota": {
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "enabled": false,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "check_on_raw": false,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "max_size": -1,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "max_size_kb": 0,
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:        "max_objects": -1
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    },
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "temp_url_keys": [],
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "type": "rgw",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "mfa_ids": [],
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "account_id": "",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "path": "/",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "create_date": "2025-11-27T10:59:37.559119Z",
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "tags": [],
Nov 27 05:59:37 np0005537642 funny_poincare[98176]:    "group_ids": []
Nov 27 05:59:37 np0005537642 funny_poincare[98176]: }
Nov 27 05:59:37 np0005537642 funny_poincare[98176]: 
Nov 27 05:59:37 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.5 scrub starts
Nov 27 05:59:37 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.5 scrub ok
Nov 27 05:59:37 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:37 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:37 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:37 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40021d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:37 np0005537642 systemd[1]: libpod-facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b.scope: Deactivated successfully.
Nov 27 05:59:37 np0005537642 conmon[98176]: conmon facb0c2b4d99c5a9f40b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b.scope/container/memory.events
Nov 27 05:59:37 np0005537642 podman[98139]: 2025-11-27 10:59:37.873126072 +0000 UTC m=+0.921163007 container died facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b (image=quay.io/ceph/ceph:v19, name=funny_poincare, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 05:59:37 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v70: 353 pgs: 4 unknown, 349 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 27 05:59:37 np0005537642 systemd[1]: var-lib-containers-storage-overlay-d01e2e386ec00a15947be4860880e9d34dcf965c53ea299f323053b52ca26957-merged.mount: Deactivated successfully.
Nov 27 05:59:38 np0005537642 podman[98139]: 2025-11-27 10:59:38.009745979 +0000 UTC m=+1.057782924 container remove facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b (image=quay.io/ceph/ceph:v19, name=funny_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 27 05:59:38 np0005537642 systemd[1]: libpod-conmon-facb0c2b4d99c5a9f40bed25527d4fd508c2975e86bf610549967e18adf40f7b.scope: Deactivated successfully.
Nov 27 05:59:38 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 23 completed events
Nov 27 05:59:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:59:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:38 np0005537642 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Nov 27 05:59:38 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:38 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 27 05:59:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:38 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:38 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.0 scrub starts
Nov 27 05:59:38 np0005537642 python3[98376]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:59:38 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.0 scrub ok
Nov 27 05:59:38 np0005537642 ceph-mgr[74636]: [dashboard INFO request] [192.168.122.100:55194] [GET] [200] [0.211s] [6.3K] [831eb21f-5438-4b9b-8c36-8381077aa09c] /
Nov 27 05:59:39 np0005537642 python3[98420]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 27 05:59:39 np0005537642 ceph-mgr[74636]: [dashboard INFO request] [192.168.122.100:55208] [GET] [200] [0.001s] [6.3K] [c33d00bf-c96b-4a3c-9eca-4f9ee539d587] /
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.518804654 +0000 UTC m=+2.425134355 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.607744812 +0000 UTC m=+2.514074453 volume create be2751b52a47c5c0d929157ea1b473735aba2f04a76d15c915143777755aa2d9
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.626049903 +0000 UTC m=+2.532379544 container create 692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=condescending_pike, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:39 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Nov 27 05:59:39 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Nov 27 05:59:39 np0005537642 systemd[1]: Started libpod-conmon-692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab.scope.
Nov 27 05:59:39 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:39 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:39 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:39 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b38554e7c5a3cef437f0ee908f92c2f4fc11433aeb044aed167f16fe407cc67/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.742493177 +0000 UTC m=+2.648822868 container init 692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=condescending_pike, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.750454211 +0000 UTC m=+2.656783822 container start 692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=condescending_pike, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:39 np0005537642 condescending_pike[98470]: 65534 65534
Nov 27 05:59:39 np0005537642 systemd[1]: libpod-692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab.scope: Deactivated successfully.
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.756865773 +0000 UTC m=+2.663195384 container attach 692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=condescending_pike, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.757271974 +0000 UTC m=+2.663601615 container died 692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=condescending_pike, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:39 np0005537642 systemd[1]: var-lib-containers-storage-overlay-0b38554e7c5a3cef437f0ee908f92c2f4fc11433aeb044aed167f16fe407cc67-merged.mount: Deactivated successfully.
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.827056557 +0000 UTC m=+2.733386168 container remove 692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab (image=quay.io/prometheus/alertmanager:v0.25.0, name=condescending_pike, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:39 np0005537642 podman[98184]: 2025-11-27 10:59:39.8342745 +0000 UTC m=+2.740604121 volume remove be2751b52a47c5c0d929157ea1b473735aba2f04a76d15c915143777755aa2d9
Nov 27 05:59:39 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:39 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:39 np0005537642 systemd[1]: libpod-conmon-692ba713b3842e9b898c9cbb65cfd051e817a98abf17d911cf1daf3db61cc9ab.scope: Deactivated successfully.
Nov 27 05:59:39 np0005537642 podman[98490]: 2025-11-27 10:59:39.905224125 +0000 UTC m=+0.043521549 volume create c17e22fc00b5f905486556b2d103ffae5bf82f385f4243db61e98a623467bafd
Nov 27 05:59:39 np0005537642 podman[98490]: 2025-11-27 10:59:39.9199598 +0000 UTC m=+0.058257224 container create 855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_raman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:39 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 511 B/s wr, 28 op/s; 98 B/s, 4 objects/s recovering
Nov 27 05:59:39 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Nov 27 05:59:39 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 27 05:59:39 np0005537642 podman[98490]: 2025-11-27 10:59:39.886665947 +0000 UTC m=+0.024963381 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 27 05:59:40 np0005537642 systemd[1]: Started libpod-conmon-855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39.scope.
Nov 27 05:59:40 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:40 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c076449e93ffa987fd091975c4b325d23798021edbf76c9093e0a9286da8bc76/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 10:59:40 2025: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Nov 27 05:59:40 np0005537642 podman[98490]: 2025-11-27 10:59:40.058016786 +0000 UTC m=+0.196314230 container init 855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_raman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:40 np0005537642 podman[98490]: 2025-11-27 10:59:40.06748916 +0000 UTC m=+0.205786604 container start 855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_raman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:40 np0005537642 angry_raman[98507]: 65534 65534
Nov 27 05:59:40 np0005537642 systemd[1]: libpod-855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39.scope: Deactivated successfully.
Nov 27 05:59:40 np0005537642 podman[98490]: 2025-11-27 10:59:40.07194296 +0000 UTC m=+0.210240404 container attach 855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_raman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:40 np0005537642 podman[98490]: 2025-11-27 10:59:40.07306698 +0000 UTC m=+0.211364434 container died 855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_raman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:40 np0005537642 systemd[1]: var-lib-containers-storage-overlay-c076449e93ffa987fd091975c4b325d23798021edbf76c9093e0a9286da8bc76-merged.mount: Deactivated successfully.
Nov 27 05:59:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 27 05:59:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 27 05:59:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 27 05:59:40 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 27 05:59:40 np0005537642 podman[98490]: 2025-11-27 10:59:40.436271399 +0000 UTC m=+0.574568813 container remove 855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39 (image=quay.io/prometheus/alertmanager:v0.25.0, name=angry_raman, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:40 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 85 pg[9.1e( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:40 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 85 pg[9.6( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:40 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 85 pg[9.e( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:40 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 85 pg[9.16( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=85) [1] r=0 lpr=85 pi=[67,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:40 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 27 05:59:40 np0005537642 podman[98490]: 2025-11-27 10:59:40.443202125 +0000 UTC m=+0.581499579 volume remove c17e22fc00b5f905486556b2d103ffae5bf82f385f4243db61e98a623467bafd
Nov 27 05:59:40 np0005537642 systemd[1]: libpod-conmon-855f305c3b56f9c995219b72c73c58d0d71dd71fdf6d1fd6c2ed1dda2e478d39.scope: Deactivated successfully.
Nov 27 05:59:40 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:40 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:40 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:40 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.1f scrub starts
Nov 27 05:59:40 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.1f scrub ok
Nov 27 05:59:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:40 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:40 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:40 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:40 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:41 np0005537642 systemd[1]: Starting Ceph alertmanager.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 86 pg[9.16( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[67,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 86 pg[9.16( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[67,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 86 pg[9.e( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[67,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 86 pg[9.e( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[67,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 86 pg[9.6( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[67,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 86 pg[9.6( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[67,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 86 pg[9.1e( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[67,86)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 86 pg[9.1e( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=86) [1]/[0] r=-1 lpr=86 pi=[67,86)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 27 05:59:41 np0005537642 podman[98650]: 2025-11-27 10:59:41.515719083 +0000 UTC m=+0.052557182 volume create a7669a52377ec4fb0153ec45fcd0bb878773a82e1c7c445265bbd7ec3cb8ac22
Nov 27 05:59:41 np0005537642 podman[98650]: 2025-11-27 10:59:41.530817648 +0000 UTC m=+0.067655747 container create 3b2d33c696177af9f9c409dbd59184012d0c2369d434cb01038cb0ca1f741fd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:41 np0005537642 podman[98650]: 2025-11-27 10:59:41.492127309 +0000 UTC m=+0.028965458 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Nov 27 05:59:41 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095a6dd2f9dcde732d7fb0f316f2fc084c66aa7d262764a6346983d07842434b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:41 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/095a6dd2f9dcde732d7fb0f316f2fc084c66aa7d262764a6346983d07842434b/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:41 np0005537642 podman[98650]: 2025-11-27 10:59:41.619018885 +0000 UTC m=+0.155857034 container init 3b2d33c696177af9f9c409dbd59184012d0c2369d434cb01038cb0ca1f741fd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:41 np0005537642 podman[98650]: 2025-11-27 10:59:41.624952565 +0000 UTC m=+0.161790634 container start 3b2d33c696177af9f9c409dbd59184012d0c2369d434cb01038cb0ca1f741fd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 05:59:41 np0005537642 bash[98650]: 3b2d33c696177af9f9c409dbd59184012d0c2369d434cb01038cb0ca1f741fd4
Nov 27 05:59:41 np0005537642 systemd[1]: Started Ceph alertmanager.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 27 05:59:41 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:41.661Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:41.661Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:41.676Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:41.679Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:41 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:41.726Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:41.727Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:41.732Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:41.732Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev c475bc5a-cfe5-448c-9c4b-c32e0f0a7144 (Updating alertmanager deployment (+1 -> 1))
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event c475bc5a-cfe5-448c-9c4b-c32e0f0a7144 (Updating alertmanager deployment (+1 -> 1)) in 5 seconds
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 8eb04479-944d-4267-9b9c-6b4e389a3647 (Updating grafana deployment (+1 -> 1))
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Nov 27 05:59:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:41 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Nov 27 05:59:41 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 353 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 511 B/s wr, 28 op/s; 98 B/s, 4 objects/s recovering
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Nov 27 05:59:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 27 05:59:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:59:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:59:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:59:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:59:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 05:59:42 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 27 05:59:42 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 27 05:59:42 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 27 05:59:42 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 27 05:59:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:42 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:43 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 24 completed events
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:43 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 1475e012-0a1b-433a-bde0-d978f481d3d8 (Global Recovery Event) in 5 seconds
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: Regenerating cephadm self-signed grafana TLS certificates
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: Deploying daemon grafana.compute-0 on compute-0
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:43 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.1b scrub starts
Nov 27 05:59:43 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.1b scrub ok
Nov 27 05:59:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:43.679Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000499525s
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 27 05:59:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:43 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b80016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 27 05:59:43 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 88 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=0/0 n=4 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:43 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 88 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=0/0 n=4 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:43 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 88 pg[9.e( v 57'872 (0'0,57'872] local-lis/les=0/0 n=6 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:43 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 88 pg[9.e( v 57'872 (0'0,57'872] local-lis/les=0/0 n=6 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:43 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 88 pg[9.1e( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:43 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 88 pg[9.1e( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:43 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:43 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 3 active+remapped, 1 active+recovering+remapped, 349 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 6/221 objects misplaced (2.715%); 54 B/s, 3 objects/s recovering
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Nov 27 05:59:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 27 05:59:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow[97533]: [WARNING] 330/105944 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 27 05:59:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:44 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:44 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.16 scrub starts
Nov 27 05:59:44 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.16 scrub ok
Nov 27 05:59:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 27 05:59:44 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 27 05:59:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 27 05:59:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 27 05:59:45 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 27 05:59:45 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 89 pg[9.6( v 57'872 (0'0,57'872] local-lis/les=0/0 n=6 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=89) [1] r=0 lpr=89 pi=[67,89)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:45 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 89 pg[9.6( v 57'872 (0'0,57'872] local-lis/les=0/0 n=6 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=89) [1] r=0 lpr=89 pi=[67,89)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:45 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 89 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=88/89 n=4 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:45 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 89 pg[9.e( v 57'872 (0'0,57'872] local-lis/les=88/89 n=6 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:45 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 89 pg[9.1e( v 57'872 (0'0,57'872] local-lis/les=88/89 n=5 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=88) [1] r=0 lpr=88 pi=[67,88)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:45 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.14 scrub starts
Nov 27 05:59:45 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.14 scrub ok
Nov 27 05:59:45 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:45 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:45 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 27 05:59:45 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:45 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:45 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 3 active+remapped, 1 active+recovering+remapped, 349 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 6/221 objects misplaced (2.715%); 48 B/s, 3 objects/s recovering
Nov 27 05:59:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Nov 27 05:59:45 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 27 05:59:46 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 90 pg[9.6( v 57'872 (0'0,57'872] local-lis/les=89/90 n=6 ec=67/50 lis/c=86/67 les/c/f=87/68/0 sis=89) [1] r=0 lpr=89 pi=[67,89)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:46 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.1 deep-scrub starts
Nov 27 05:59:46 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 12.1 deep-scrub ok
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 27 05:59:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:46 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 27 05:59:46 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 27 05:59:47 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Nov 27 05:59:47 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Nov 27 05:59:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 27 05:59:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 27 05:59:47 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 27 05:59:47 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:47 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8001fc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:47 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:47 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:47 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v83: 353 pgs: 3 active+remapped, 1 active+recovering+remapped, 349 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 6/221 objects misplaced (2.715%)
Nov 27 05:59:47 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Nov 27 05:59:47 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 27 05:59:48 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 25 completed events
Nov 27 05:59:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:59:48 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 27 05:59:48 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 27 05:59:48 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:48 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 27 05:59:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:49 np0005537642 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,4 pgs not in active + clean state
Nov 27 05:59:49 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 27 05:59:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 27 05:59:49 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 27 05:59:49 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 27 05:59:49 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 93 pg[9.1a( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=93) [1] r=0 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:49 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 93 pg[9.a( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=93) [1] r=0 lpr=93 pi=[67,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:49 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.12 deep-scrub starts
Nov 27 05:59:49 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.12 deep-scrub ok
Nov 27 05:59:49 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:49 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:49 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:49 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8001fc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:49 np0005537642 podman[98778]: 2025-11-27 10:59:49.856097225 +0000 UTC m=+7.245236488 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 27 05:59:49 np0005537642 podman[98778]: 2025-11-27 10:59:49.906597629 +0000 UTC m=+7.295736852 container create 314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833 (image=quay.io/ceph/grafana:10.4.0, name=angry_payne, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:49 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v85: 353 pgs: 2 peering, 2 active+remapped, 349 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 137 B/s, 7 objects/s recovering
Nov 27 05:59:49 np0005537642 systemd[1]: Started libpod-conmon-314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833.scope.
Nov 27 05:59:49 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:50 np0005537642 podman[98778]: 2025-11-27 10:59:50.027164513 +0000 UTC m=+7.416303826 container init 314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833 (image=quay.io/ceph/grafana:10.4.0, name=angry_payne, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 podman[98778]: 2025-11-27 10:59:50.040555988 +0000 UTC m=+7.429695251 container start 314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833 (image=quay.io/ceph/grafana:10.4.0, name=angry_payne, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 angry_payne[99004]: 472 0
Nov 27 05:59:50 np0005537642 systemd[1]: libpod-314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833.scope: Deactivated successfully.
Nov 27 05:59:50 np0005537642 podman[98778]: 2025-11-27 10:59:50.055037665 +0000 UTC m=+7.444176988 container attach 314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833 (image=quay.io/ceph/grafana:10.4.0, name=angry_payne, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 podman[98778]: 2025-11-27 10:59:50.055672254 +0000 UTC m=+7.444811497 container died 314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833 (image=quay.io/ceph/grafana:10.4.0, name=angry_payne, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 systemd[1]: var-lib-containers-storage-overlay-fb7cb3ef460750d08128a15d5bdfd03d2244bd8c176e952b75cb12d6a150ec38-merged.mount: Deactivated successfully.
Nov 27 05:59:50 np0005537642 podman[98778]: 2025-11-27 10:59:50.165987412 +0000 UTC m=+7.555126635 container remove 314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833 (image=quay.io/ceph/grafana:10.4.0, name=angry_payne, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 systemd[1]: libpod-conmon-314436a3cc55003e3224e7700f5ba2279b6c7dff47d574e897f9903f33640833.scope: Deactivated successfully.
Nov 27 05:59:50 np0005537642 podman[99021]: 2025-11-27 10:59:50.295841022 +0000 UTC m=+0.097124329 container create 1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b (image=quay.io/ceph/grafana:10.4.0, name=epic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 podman[99021]: 2025-11-27 10:59:50.237885023 +0000 UTC m=+0.039168390 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 27 05:59:50 np0005537642 systemd[1]: Started libpod-conmon-1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b.scope.
Nov 27 05:59:50 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:50 np0005537642 podman[99021]: 2025-11-27 10:59:50.413943524 +0000 UTC m=+0.215226831 container init 1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b (image=quay.io/ceph/grafana:10.4.0, name=epic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 podman[99021]: 2025-11-27 10:59:50.422874652 +0000 UTC m=+0.224157949 container start 1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b (image=quay.io/ceph/grafana:10.4.0, name=epic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 epic_mirzakhani[99037]: 472 0
Nov 27 05:59:50 np0005537642 systemd[1]: libpod-1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b.scope: Deactivated successfully.
Nov 27 05:59:50 np0005537642 podman[99021]: 2025-11-27 10:59:50.440661994 +0000 UTC m=+0.241945301 container attach 1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b (image=quay.io/ceph/grafana:10.4.0, name=epic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 podman[99021]: 2025-11-27 10:59:50.441183549 +0000 UTC m=+0.242466856 container died 1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b (image=quay.io/ceph/grafana:10.4.0, name=epic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 systemd[1]: var-lib-containers-storage-overlay-408882d9529d6d9243b5facb0ae98f008291617cafacd727855c0ff5b11751c0-merged.mount: Deactivated successfully.
Nov 27 05:59:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 27 05:59:50 np0005537642 podman[99021]: 2025-11-27 10:59:50.583421956 +0000 UTC m=+0.384705263 container remove 1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b (image=quay.io/ceph/grafana:10.4.0, name=epic_mirzakhani, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:50 np0005537642 systemd[1]: libpod-conmon-1b006c7141bda13048181df021442bf5f381d7e01c2d01616eddab2ec308865b.scope: Deactivated successfully.
Nov 27 05:59:50 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 27 05:59:50 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:50 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:50 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 27 05:59:50 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:50 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 27 05:59:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 27 05:59:50 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 27 05:59:50 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 94 pg[9.a( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[67,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:50 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 94 pg[9.a( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[67,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:50 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 94 pg[9.1a( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[67,94)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:50 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 94 pg[9.1a( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=94) [1]/[0] r=-1 lpr=94 pi=[67,94)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 05:59:50 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:51 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:51 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:51 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:51 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:51 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 05:59:51 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Nov 27 05:59:51 np0005537642 systemd[1]: Starting Ceph grafana.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:59:51 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Nov 27 05:59:51 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0[98665]: ts=2025-11-27T10:59:51.684Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.004665426s
Nov 27 05:59:51 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:51 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 27 05:59:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 27 05:59:51 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 27 05:59:51 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:51 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:51 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 2 peering, 2 active+remapped, 349 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 128 B/s, 6 objects/s recovering
Nov 27 05:59:51 np0005537642 podman[99181]: 2025-11-27 10:59:51.972337267 +0000 UTC m=+0.057905769 container create 64887548efd8ea94ae6cc4d74ce8d20ae68a58e079077f7deb3f19d16095acef (image=quay.io/ceph/grafana:10.4.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:52 np0005537642 podman[99181]: 2025-11-27 10:59:51.943533317 +0000 UTC m=+0.029101839 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Nov 27 05:59:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f912913c7db4ef87a29a021cadf5156fc386e198ab26914c3c7845901caf6eb/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f912913c7db4ef87a29a021cadf5156fc386e198ab26914c3c7845901caf6eb/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f912913c7db4ef87a29a021cadf5156fc386e198ab26914c3c7845901caf6eb/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f912913c7db4ef87a29a021cadf5156fc386e198ab26914c3c7845901caf6eb/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:52 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f912913c7db4ef87a29a021cadf5156fc386e198ab26914c3c7845901caf6eb/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:52 np0005537642 podman[99181]: 2025-11-27 10:59:52.092281492 +0000 UTC m=+0.177850074 container init 64887548efd8ea94ae6cc4d74ce8d20ae68a58e079077f7deb3f19d16095acef (image=quay.io/ceph/grafana:10.4.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:52 np0005537642 podman[99181]: 2025-11-27 10:59:52.101351053 +0000 UTC m=+0.186919585 container start 64887548efd8ea94ae6cc4d74ce8d20ae68a58e079077f7deb3f19d16095acef (image=quay.io/ceph/grafana:10.4.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 05:59:52 np0005537642 bash[99181]: 64887548efd8ea94ae6cc4d74ce8d20ae68a58e079077f7deb3f19d16095acef
Nov 27 05:59:52 np0005537642 systemd[1]: Started Ceph grafana.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 8eb04479-944d-4267-9b9c-6b4e389a3647 (Updating grafana deployment (+1 -> 1))
Nov 27 05:59:52 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 8eb04479-944d-4267-9b9c-6b4e389a3647 (Updating grafana deployment (+1 -> 1)) in 11 seconds
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev a105c4e0-f087-4fb8-a0b4-17642d66bf09 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309374906Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2025-11-27T10:59:52Z
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309646064Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309653054Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309656794Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309660244Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309663534Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309667814Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309671484Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309675605Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309679015Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309682315Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309685475Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309688755Z level=info msg=Target target=[all]
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309694805Z level=info msg="Path Home" path=/usr/share/grafana
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309698025Z level=info msg="Path Data" path=/var/lib/grafana
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309701135Z level=info msg="Path Logs" path=/var/log/grafana
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309704675Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309707905Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=settings t=2025-11-27T10:59:52.309711136Z level=info msg="App mode production"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=sqlstore t=2025-11-27T10:59:52.309944562Z level=info msg="Connecting to DB" dbtype=sqlite3
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=sqlstore t=2025-11-27T10:59:52.309958443Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.310495018Z level=info msg="Starting DB migrations"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.311728614Z level=info msg="Executing migration" id="create migration_log table"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.312868046Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.139012ms
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.lbrkhk on compute-0
Nov 27 05:59:52 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.lbrkhk on compute-0
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.32271418Z level=info msg="Executing migration" id="create user table"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.323662707Z level=info msg="Migration successfully executed" id="create user table" duration=949.887µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.331598596Z level=info msg="Executing migration" id="add unique index user.login"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.332355368Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=758.882µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.338243247Z level=info msg="Executing migration" id="add unique index user.email"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.338819174Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=575.987µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.348475932Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.349397979Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=923.917µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.355227177Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.355925127Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=698.39µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.360138428Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.362253809Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.108411ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.36715458Z level=info msg="Executing migration" id="create user table v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.367896292Z level=info msg="Migration successfully executed" id="create user table v2" duration=742.102µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.372521925Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.373151533Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=629.708µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.377639062Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.378512548Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=859.015µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.3862207Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.386697173Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=478.134µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.396052513Z level=info msg="Executing migration" id="Drop old table user_v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.396748213Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=697.42µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.40323208Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.404571428Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.375949ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.410423037Z level=info msg="Executing migration" id="Update user table charset"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.410448958Z level=info msg="Migration successfully executed" id="Update user table charset" duration=27.321µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.414478874Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.415638157Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.159383ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.417726037Z level=info msg="Executing migration" id="Add missing user data"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.417919293Z level=info msg="Migration successfully executed" id="Add missing user data" duration=193.326µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.420993981Z level=info msg="Executing migration" id="Add is_disabled column to user"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.421936858Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=942.677µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.424348658Z level=info msg="Executing migration" id="Add index user.login/user.email"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.424978976Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=630.488µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.432322248Z level=info msg="Executing migration" id="Add is_service_account column to user"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.434467499Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.145751ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.437832076Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.447600728Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.764152ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.453323243Z level=info msg="Executing migration" id="Add uid column to user"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.454870577Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.547654ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.458873332Z level=info msg="Executing migration" id="Update uid column values for users"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.459262524Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=389.482µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.464141374Z level=info msg="Executing migration" id="Add unique index user_uid"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.46504396Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=902.806µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.47128285Z level=info msg="Executing migration" id="create temp user table v1-7"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.47233514Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.05296ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.481863035Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.482936556Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.074991ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.490081722Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.491554254Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.472813ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.504284651Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.505438354Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.157253ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.515571106Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.516983877Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.422961ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.520315232Z level=info msg="Executing migration" id="Update temp_user table charset"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.520365904Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=51.742µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.523417252Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.52475601Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.298347ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.528862359Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.530084534Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.222735ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.547150246Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.548740861Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.594546ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.556539146Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.557403451Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=868.565µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.561643873Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.565097273Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.452279ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.567473281Z level=info msg="Executing migration" id="create temp_user v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.568433429Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=960.408µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.571058234Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.572078804Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.02091ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.575042119Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.57577258Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=730.081µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.578310133Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.579119006Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=809.263µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.583163443Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.584052729Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=889.746µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.589894337Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.590421342Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=528.525µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.597534757Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.598361321Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=832.194µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.607282228Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.607805743Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=525.175µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.610997245Z level=info msg="Executing migration" id="create star table"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.611782797Z level=info msg="Migration successfully executed" id="create star table" duration=785.702µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.617933795Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.618917063Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=984.769µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.632740831Z level=info msg="Executing migration" id="create org table v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.633866184Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.119302ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.641076841Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.642162453Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.087942ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.650857803Z level=info msg="Executing migration" id="create org_user table v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.651869202Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.015279ms
Nov 27 05:59:52 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.7 deep-scrub starts
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:52 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8001fc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:52 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.7 deep-scrub ok
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.667375519Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.668501261Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.127092ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.677099529Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.678420887Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.323658ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.682181875Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.683254396Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.081401ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.691000039Z level=info msg="Executing migration" id="Update org table charset"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.691059301Z level=info msg="Migration successfully executed" id="Update org table charset" duration=64.342µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.699166275Z level=info msg="Executing migration" id="Update org_user table charset"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.699192645Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=27.49µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.704862069Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.705108106Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=246.727µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.709726019Z level=info msg="Executing migration" id="create dashboard table"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.710558913Z level=info msg="Migration successfully executed" id="create dashboard table" duration=835.154µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.713262841Z level=info msg="Executing migration" id="add index dashboard.account_id"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.714066274Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=803.263µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.71601231Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.716828094Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=815.423µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.719982974Z level=info msg="Executing migration" id="create dashboard_tag table"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.720681124Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=698.05µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.724200776Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.724951978Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=748.401µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.73024051Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.731089734Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=849.484µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.741772242Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.746855398Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=5.082656ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.753246483Z level=info msg="Executing migration" id="create dashboard v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.754220651Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=976.879µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.760157062Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.760859212Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=707.841µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.768160312Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.769235353Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.075841ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.77955617Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.780129797Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=565.747µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.783251037Z level=info msg="Executing migration" id="drop table dashboard_v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.784676358Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.425191ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.790888467Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.7909927Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=106.313µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.797780075Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.800186895Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.403239ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.808878235Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.810743879Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.866364ms
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.812565211Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.814082495Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.516784ms
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.820896111Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: Deploying daemon haproxy.rgw.default.compute-0.lbrkhk on compute-0
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.822336833Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.443742ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.828799509Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.831166077Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.374488ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.839795896Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.840726513Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=932.047µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.860985166Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.862131869Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.151943ms
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.870341236Z level=info msg="Executing migration" id="Update dashboard table charset"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.870406688Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=53.441µs
Nov 27 05:59:52 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.87987543Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.879950583Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=81.213µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.888632683Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.891018211Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.389268ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.895455679Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.897385335Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.928276ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.901925296Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.903730918Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.806142ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.915470966Z level=info msg="Executing migration" id="Add column uid in dashboard"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.918815032Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=3.343866ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.933940078Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.935218065Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=1.282727ms
Nov 27 05:59:52 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 96 pg[9.a( v 57'872 (0'0,57'872] local-lis/les=0/0 n=6 ec=67/50 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:52 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 96 pg[9.a( v 57'872 (0'0,57'872] local-lis/les=0/0 n=6 ec=67/50 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:52 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 96 pg[9.1a( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 05:59:52 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 96 pg[9.1a( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.939391675Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.940172687Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=781.012µs
Nov 27 05:59:52 np0005537642 podman[99302]: 2025-11-27 10:59:52.942810143 +0000 UTC m=+0.060005239 container create daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f (image=quay.io/ceph/haproxy:2.3, name=hungry_pascal)
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.948357523Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.949952789Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.589616ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.956422276Z level=info msg="Executing migration" id="Update dashboard title length"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.956480407Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=68.602µs
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.967502145Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.968978487Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.479142ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.972946372Z level=info msg="Executing migration" id="create dashboard_provisioning"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.974014672Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.06413ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.980130629Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.986317967Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.186219ms
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.992298629Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.993341399Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.04343ms
Nov 27 05:59:52 np0005537642 systemd[1]: Started libpod-conmon-daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f.scope.
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.997911551Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Nov 27 05:59:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:52.999250549Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.342318ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.002814652Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Nov 27 05:59:53 np0005537642 podman[99302]: 2025-11-27 10:59:52.908882076 +0000 UTC m=+0.026077192 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.003844582Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.03119ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.011337258Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.011957365Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=639.378µs
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.015747375Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.016544978Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=799.194µs
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.020870722Z level=info msg="Executing migration" id="Add check_sum column"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.023139808Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.269875ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.025919288Z level=info msg="Executing migration" id="Add index for dashboard_title"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.026853464Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=934.796µs
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.032141467Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.032460306Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=321.779µs
Nov 27 05:59:53 np0005537642 systemd[1]: Started libcrun container.
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.038753097Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.039054336Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=304.469µs
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.049238129Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.05029556Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.053901ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.059249258Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Nov 27 05:59:53 np0005537642 podman[99302]: 2025-11-27 10:59:53.061883514 +0000 UTC m=+0.179078630 container init daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f (image=quay.io/ceph/haproxy:2.3, name=hungry_pascal)
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.062071149Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.828382ms
Nov 27 05:59:53 np0005537642 podman[99302]: 2025-11-27 10:59:53.07077459 +0000 UTC m=+0.187969726 container start daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f (image=quay.io/ceph/haproxy:2.3, name=hungry_pascal)
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.071212422Z level=info msg="Executing migration" id="create data_source table"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.072391216Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.181254ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.076125164Z level=info msg="Executing migration" id="add index data_source.account_id"
Nov 27 05:59:53 np0005537642 podman[99302]: 2025-11-27 10:59:53.076436503 +0000 UTC m=+0.193631609 container attach daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f (image=quay.io/ceph/haproxy:2.3, name=hungry_pascal)
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.077112272Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=979.158µs
Nov 27 05:59:53 np0005537642 hungry_pascal[99319]: 0 0
Nov 27 05:59:53 np0005537642 systemd[1]: libpod-daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f.scope: Deactivated successfully.
Nov 27 05:59:53 np0005537642 conmon[99319]: conmon daf58c8534fa42ae0d71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f.scope/container/memory.events
Nov 27 05:59:53 np0005537642 podman[99302]: 2025-11-27 10:59:53.079184152 +0000 UTC m=+0.196379258 container died daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f (image=quay.io/ceph/haproxy:2.3, name=hungry_pascal)
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.080922142Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.082072225Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.152043ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.08606839Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.087162352Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.096752ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.089082287Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.089841939Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=759.822µs
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.092018622Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.097478829Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.461998ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.099461616Z level=info msg="Executing migration" id="create data_source table v2"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.100656641Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.195435ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.104956844Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.106275952Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.321048ms
Nov 27 05:59:53 np0005537642 systemd[1]: var-lib-containers-storage-overlay-359b51bb1bf0b263f8fdb1c35bafb9a9cfc0d5d1900a147857a4b11eccc969c8-merged.mount: Deactivated successfully.
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.112123271Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.114204011Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=2.08637ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.121234393Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.122375366Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.146163ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.127912246Z level=info msg="Executing migration" id="Add column with_credentials"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.131855439Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.945833ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.149455106Z level=info msg="Executing migration" id="Add secure json data column"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.153543014Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.087138ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.200799335Z level=info msg="Executing migration" id="Update data_source table charset"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.200857647Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=62.502µs
Nov 27 05:59:53 np0005537642 podman[99302]: 2025-11-27 10:59:53.20129554 +0000 UTC m=+0.318490626 container remove daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f (image=quay.io/ceph/haproxy:2.3, name=hungry_pascal)
Nov 27 05:59:53 np0005537642 systemd[1]: libpod-conmon-daf58c8534fa42ae0d7136625d4ec2e9a91f8b53081b86d78a1bc9996df38f6f.scope: Deactivated successfully.
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.272967644Z level=info msg="Executing migration" id="Update initial version to 1"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.273431658Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=466.933µs
Nov 27 05:59:53 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 27 05:59:53 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:53 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.847926017Z level=info msg="Executing migration" id="Add read_only data column"
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:53.852848659Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.948533ms
Nov 27 05:59:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:53 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:53 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 50 B/s, 2 objects/s recovering
Nov 27 05:59:54 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 27 05:59:54 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 26 completed events
Nov 27 05:59:54 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 05:59:54 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:54.580651824Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Nov 27 05:59:54 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:54.58118628Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=612.998µs
Nov 27 05:59:54 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Nov 27 05:59:54 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Nov 27 05:59:54 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:54 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:55 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 27 05:59:55 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:55 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b80032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:55 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:55.850473174Z level=info msg="Executing migration" id="Update json_data with nulls"
Nov 27 05:59:55 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:55.85103513Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=591.397µs
Nov 27 05:59:55 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:55 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:55 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v91: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 2 objects/s recovering
Nov 27 05:59:56 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:56 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:56 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:56.952615844Z level=info msg="Executing migration" id="Add uid column"
Nov 27 05:59:56 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:56.956858776Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.286333ms
Nov 27 05:59:56 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 27 05:59:56 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.8 deep-scrub starts
Nov 27 05:59:56 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:56.998871756Z level=info msg="Executing migration" id="Update uid value"
Nov 27 05:59:56 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:56.999509865Z level=info msg="Migration successfully executed" id="Update uid value" duration=640.459µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.026288636Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.027432759Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.146023ms
Nov 27 05:59:57 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.8 deep-scrub ok
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.12498939Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.126872704Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.904915ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.163141989Z level=info msg="Executing migration" id="create api_key table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.166016171Z level=info msg="Migration successfully executed" id="create api_key table" duration=2.875933ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.224074054Z level=info msg="Executing migration" id="add index api_key.account_id"
Nov 27 05:59:57 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.226478903Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=2.404729ms
Nov 27 05:59:57 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:57 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 27 05:59:57 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.291850457Z level=info msg="Executing migration" id="add index api_key.key"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.292924447Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.076311ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.316417444Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.318392531Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.976727ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.33952336Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.340682603Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.162353ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.350266739Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.351436733Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.172314ms
Nov 27 05:59:57 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 97 pg[9.a( v 57'872 (0'0,57'872] local-lis/les=96/97 n=6 ec=67/50 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:57 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:57 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.365192199Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.366113386Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=923.867µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.379898363Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Nov 27 05:59:57 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 97 pg[9.1a( v 57'872 (0'0,57'872] local-lis/les=96/97 n=5 ec=67/50 lis/c=94/67 les/c/f=95/68/0 sis=96) [1] r=0 lpr=96 pi=[67,96)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.385382281Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=5.481498ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.394917576Z level=info msg="Executing migration" id="create api_key table v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.395747179Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=831.403µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.413616624Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.414730396Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.115892ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.430150901Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.43117348Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.02475ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.436108202Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.436860714Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=752.342µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.44783013Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.448223141Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=394.641µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.461810663Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.462782611Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=991.939µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.469127473Z level=info msg="Executing migration" id="Update api_key table charset"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.469168015Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=57.122µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.480195692Z level=info msg="Executing migration" id="Add expires to api_key table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.483301232Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.11832ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.490780067Z level=info msg="Executing migration" id="Add service account foreign key"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.494086982Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.311945ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.499865199Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.50025488Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=392.941µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.505676816Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.511108353Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=5.429197ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.516989222Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.522207062Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=5.19818ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.527824084Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.528741491Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=917.287µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.532661524Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.533367904Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=705.87µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.539425178Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.541472557Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=2.046669ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.546560274Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.548407107Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.845513ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.553873885Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.555522932Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.648007ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.561632858Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.563243825Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.609276ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.572784789Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.572981585Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=212.056µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.577222647Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.577341651Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=120.554µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.581723727Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.586384271Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=4.654704ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.595241616Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.598285444Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.045578ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.609426264Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.609665671Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=244.987µs
Nov 27 05:59:57 np0005537642 systemd[1]: Reloading.
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.615660594Z level=info msg="Executing migration" id="create quota table v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.616707244Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.057271ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.62940405Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.631660255Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=2.247005ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.642427185Z level=info msg="Executing migration" id="Update quota table charset"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.642569949Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=128.534µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.651125875Z level=info msg="Executing migration" id="create plugin_setting table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.652351191Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.227845ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.660368222Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.661476343Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.103972ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.670962197Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.677129914Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=6.165897ms
Nov 27 05:59:57 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.686749151Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.686928737Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=184.316µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.69606348Z level=info msg="Executing migration" id="create session table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.697373888Z level=info msg="Migration successfully executed" id="create session table" duration=1.316728ms
Nov 27 05:59:57 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.707420237Z level=info msg="Executing migration" id="Drop old table playlist table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.707656654Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=237.007µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.71167444Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.711824954Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=151.615µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.720755791Z level=info msg="Executing migration" id="create playlist table v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.721801601Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.05377ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.72939623Z level=info msg="Executing migration" id="create playlist item table v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.730378978Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=983.548µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:57 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.744931178Z level=info msg="Executing migration" id="Update playlist table charset"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.74501483Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=87.033µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.75298394Z level=info msg="Executing migration" id="Update playlist_item table charset"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.753298319Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=32.101µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.757284193Z level=info msg="Executing migration" id="Add playlist column created_at"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.760735863Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.44544ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.76515218Z level=info msg="Executing migration" id="Add playlist column updated_at"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.768497676Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.344316ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.771995697Z level=info msg="Executing migration" id="drop preferences table v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.772134301Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=138.664µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.779293377Z level=info msg="Executing migration" id="drop preferences table v3"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.779451742Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=148.045µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.785939539Z level=info msg="Executing migration" id="create preferences table v3"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.787045981Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.106362ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.795117113Z level=info msg="Executing migration" id="Update preferences table charset"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.795186735Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=38.601µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.803773823Z level=info msg="Executing migration" id="Add column team_id in preferences"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.807407027Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.628344ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.810358212Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.810607469Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=250.147µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.816404696Z level=info msg="Executing migration" id="Add column week_start in preferences"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.819964659Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.560013ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.824552371Z level=info msg="Executing migration" id="Add column preferences.json_data"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.830831032Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=6.275991ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.837812013Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.838068991Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=261.098µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.84394339Z level=info msg="Executing migration" id="Add preferences index org_id"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.846061751Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=2.117961ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.85123826Z level=info msg="Executing migration" id="Add preferences index user_id"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.853464594Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=2.227484ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:57 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b80032f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.870015251Z level=info msg="Executing migration" id="create alert table v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.873434839Z level=info msg="Migration successfully executed" id="create alert table v1" duration=3.422948ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.876760615Z level=info msg="Executing migration" id="add index alert org_id & id "
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.879079292Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=2.319537ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.886216968Z level=info msg="Executing migration" id="add index alert state"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.887407892Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.191955ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.890528392Z level=info msg="Executing migration" id="add index alert dashboard_id"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.891776388Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.249666ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.89463656Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.895420533Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=779.823µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.899873471Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.900947382Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.075531ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.90609136Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.907187642Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.098282ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.911011732Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Nov 27 05:59:57 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 27 05:59:57 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.92621219Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=15.195017ms
Nov 27 05:59:57 np0005537642 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.lbrkhk for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.931743949Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.93387531Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=2.131181ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.939549044Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.941233162Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=1.684658ms
Nov 27 05:59:57 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v93: 353 pgs: 2 peering, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 2 objects/s recovering
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.94775898Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.948314836Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=555.556µs
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.952450186Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.953511526Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.06102ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.960477487Z level=info msg="Executing migration" id="create alert_notification table v1"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.962026932Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.546944ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.965853312Z level=info msg="Executing migration" id="Add column is_default"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.971447213Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.593231ms
Nov 27 05:59:57 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.976737935Z level=info msg="Executing migration" id="Add column frequency"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.98278784Z level=info msg="Migration successfully executed" id="Add column frequency" duration=6.068814ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.986782475Z level=info msg="Executing migration" id="Add column send_reminder"
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.993118167Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=6.333103ms
Nov 27 05:59:57 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:57.995522816Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.003126595Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=7.604029ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.009677514Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.010970791Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.294487ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.013266818Z level=info msg="Executing migration" id="Update alert table charset"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.013306039Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=43.212µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.016194202Z level=info msg="Executing migration" id="Update alert_notification table charset"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.016218353Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=24.791µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.01855401Z level=info msg="Executing migration" id="create notification_journal table v1"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.019530878Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=976.858µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.026723785Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.028060344Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.338189ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.033627304Z level=info msg="Executing migration" id="drop alert_notification_journal"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.034908051Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.278917ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.043257042Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.04493462Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.678739ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.050032287Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.051452968Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.420431ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.05569313Z level=info msg="Executing migration" id="Add for to alert table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.059843399Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.150589ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.063722301Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.067594553Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.871872ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.071033552Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.071291089Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=256.937µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.074499482Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.075691956Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.192615ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.079271689Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.080323149Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.05183ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.08209327Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.086161718Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.063657ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.088969638Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.089096362Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=128.234µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.094899339Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.0959852Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.085361ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.100518981Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.102118947Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.599716ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.107188103Z level=info msg="Executing migration" id="Drop old annotation table v4"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.107451891Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=268.808µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.110433607Z level=info msg="Executing migration" id="create annotation table v5"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.111856888Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.423071ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.114907766Z level=info msg="Executing migration" id="add index annotation 0 v3"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.116237844Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.331579ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.119395715Z level=info msg="Executing migration" id="add index annotation 1 v3"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.120722703Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.328508ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.124533573Z level=info msg="Executing migration" id="add index annotation 2 v3"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.125753778Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.220925ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.133643975Z level=info msg="Executing migration" id="add index annotation 3 v3"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.135094627Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.452122ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.138683661Z level=info msg="Executing migration" id="add index annotation 4 v3"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.140239905Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.556635ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.145528858Z level=info msg="Executing migration" id="Update annotation table charset"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.14561328Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=86.332µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.148207855Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.15288419Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.672184ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.158714568Z level=info msg="Executing migration" id="Drop category_id index"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.159932543Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.220115ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.163238068Z level=info msg="Executing migration" id="Add column tags to annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.16746905Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=4.230032ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.173171914Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.174225464Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.05204ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.180534176Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.18170898Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.174234ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.185007095Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.186159018Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.152453ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.18971438Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.201326105Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.605255ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.204840096Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.205818534Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=978.278µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.209882982Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.211043395Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.159693ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.218016866Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.21849756Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=481.844µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.221024172Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.221839656Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=815.404µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.230017172Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.230267989Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=251.558µs
Nov 27 05:59:58 np0005537642 podman[99474]: 2025-11-27 10:59:58.230784434 +0000 UTC m=+0.071455430 container create 04bf3bdfbfe606eae4e65fb511b617aa3b9437dfdabd63216e07ecb9b007f04d (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-rgw-default-compute-0-lbrkhk)
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.233738509Z level=info msg="Executing migration" id="Add created time to annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.246817586Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=13.077697ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.259680216Z level=info msg="Executing migration" id="Add updated time to annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.263906258Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.229662ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.267374638Z level=info msg="Executing migration" id="Add index for created in annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.268272124Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=899.705µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.272386772Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.273255417Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=866.285µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.275181653Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.275418569Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=237.076µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.27996286Z level=info msg="Executing migration" id="Add epoch_end column"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.283945735Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=3.982225ms
Nov 27 05:59:58 np0005537642 podman[99474]: 2025-11-27 10:59:58.193991244 +0000 UTC m=+0.034662310 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Nov 27 05:59:58 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41872f463064f816abc2c4714726296ea66974bd2f612ab7c2d2099e879e42ba/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.304778225Z level=info msg="Executing migration" id="Add index for epoch_end"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.30631972Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.546035ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.313546358Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.314026052Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=490.025µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.317871822Z level=info msg="Executing migration" id="Move region to single row"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.31848181Z level=info msg="Migration successfully executed" id="Move region to single row" duration=610.158µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.321736094Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.323425022Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.688128ms
Nov 27 05:59:58 np0005537642 podman[99474]: 2025-11-27 10:59:58.323876365 +0000 UTC m=+0.164547381 container init 04bf3bdfbfe606eae4e65fb511b617aa3b9437dfdabd63216e07ecb9b007f04d (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-rgw-default-compute-0-lbrkhk)
Nov 27 05:59:58 np0005537642 podman[99474]: 2025-11-27 10:59:58.328451647 +0000 UTC m=+0.169122633 container start 04bf3bdfbfe606eae4e65fb511b617aa3b9437dfdabd63216e07ecb9b007f04d (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-rgw-default-compute-0-lbrkhk)
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.329776765Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.331412673Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.631847ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.340608027Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Nov 27 05:59:58 np0005537642 bash[99474]: 04bf3bdfbfe606eae4e65fb511b617aa3b9437dfdabd63216e07ecb9b007f04d
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.342260825Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.688119ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-rgw-default-compute-0-lbrkhk[99489]: [NOTICE] 330/105958 (2) : New worker #1 (4) forked
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.345746325Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.34762279Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.876914ms
Nov 27 05:59:58 np0005537642 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.lbrkhk for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.351677446Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.353405276Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.72819ms
Nov 27 05:59:58 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 05:59:58 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.002000057s ======
Nov 27 05:59:58 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:10:59:58.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.418299075Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.419534511Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.239306ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.463693333Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.463903009Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=214.536µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.491679699Z level=info msg="Executing migration" id="create test_data table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.493474681Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.796382ms
Nov 27 05:59:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.54689096Z level=info msg="Executing migration" id="create dashboard_version table v1"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.549358951Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=2.469371ms
Nov 27 05:59:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.573073664Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.574222387Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.151713ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.606470996Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.608444933Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.981597ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.628281454Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.628633895Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=357.01µs
Nov 27 05:59:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:58 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:58 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.757704832Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.758481515Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=783.383µs
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.765741214Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.765905859Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=170.845µs
Nov 27 05:59:58 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:58 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.quxapy on compute-2
Nov 27 05:59:58 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.quxapy on compute-2
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.868050901Z level=info msg="Executing migration" id="create team table"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.869238185Z level=info msg="Migration successfully executed" id="create team table" duration=1.194874ms
Nov 27 05:59:58 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.956886101Z level=info msg="Executing migration" id="add index team.org_id"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.958476746Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.595686ms
Nov 27 05:59:58 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.96900475Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.971045148Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=2.042388ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.976937628Z level=info msg="Executing migration" id="Add column uid in team"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.984679071Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.738433ms
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.991488757Z level=info msg="Executing migration" id="Update uid column values in team"
Nov 27 05:59:58 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:58.991873989Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=388.081µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.027931327Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.0297573Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.827533ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.036758331Z level=info msg="Executing migration" id="create team member table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.038294036Z level=info msg="Migration successfully executed" id="create team member table" duration=1.533175ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.049921981Z level=info msg="Executing migration" id="add index team_member.org_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.067072875Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=17.140193ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.094017891Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.09639843Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=2.384208ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.101536527Z level=info msg="Executing migration" id="add index team_member.team_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.103419952Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.883755ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.105559933Z level=info msg="Executing migration" id="Add column email to team table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.10959697Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.035607ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.113843222Z level=info msg="Executing migration" id="Add column external to team_member table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.117523378Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.678976ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.13285208Z level=info msg="Executing migration" id="Add column permission to team_member table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.136528935Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=3.679896ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.143596019Z level=info msg="Executing migration" id="create dashboard acl table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.144800994Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.185634ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.152094484Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.153894746Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.803752ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.162135573Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.1634044Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.269917ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.170607917Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.171818832Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.213355ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.182803989Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.183950642Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.146953ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.191443848Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.192384385Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=936.947µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.199407637Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.200434976Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.025989ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.204906335Z level=info msg="Executing migration" id="add index dashboard_permission"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.205804771Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=897.386µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.208097237Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.208611642Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=514.125µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.211736792Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.211924357Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=183.945µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.214282515Z level=info msg="Executing migration" id="create tag table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.214931704Z level=info msg="Migration successfully executed" id="create tag table" duration=648.459µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.223015877Z level=info msg="Executing migration" id="add index tag.key_value"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.223749088Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=732.811µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.228541936Z level=info msg="Executing migration" id="create login attempt table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.229101342Z level=info msg="Migration successfully executed" id="create login attempt table" duration=559.346µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.234164098Z level=info msg="Executing migration" id="add index login_attempt.username"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.236146255Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.990937ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.239625435Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.241303614Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.678429ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.245619568Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.266802488Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=21.17103ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.26929321Z level=info msg="Executing migration" id="create login_attempt v2"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.270355071Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.066011ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.272637056Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.273728848Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.091422ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.276307332Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.276716024Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=408.962µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.285732944Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Nov 27 05:59:59 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:59 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:59 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.287652899Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.915855ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.290851501Z level=info msg="Executing migration" id="create user auth table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.291795488Z level=info msg="Migration successfully executed" id="create user auth table" duration=946.407µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.29880129Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.299802959Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.019529ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.303596708Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.30366813Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=72.632µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.305788351Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.310564329Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=4.774698ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.312562537Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.317460908Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.893101ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.319809875Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.324912692Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.102507ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.326845308Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.33178448Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.934992ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.334955302Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.336503726Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.550344ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.343646652Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.35017794Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=6.525948ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.355673479Z level=info msg="Executing migration" id="create server_lock table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.357180482Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.510084ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.359500209Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.360972251Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.472102ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.365324436Z level=info msg="Executing migration" id="create user auth token table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.366993795Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.671078ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.372071871Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.373654267Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.582315ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.378165286Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.37968682Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.521764ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.382390428Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.384034485Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.642677ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.38660875Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.3918181Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.20611ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.394850697Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.395840116Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=989.769µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.399620274Z level=info msg="Executing migration" id="create cache_data table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.40120511Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.584926ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.404791013Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.406386459Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.595256ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.410137207Z level=info msg="Executing migration" id="create short_url table v1"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.411103385Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=962.508µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.414522834Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.415375828Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=853.534µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.420245829Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.420357522Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=113.663µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.422914316Z level=info msg="Executing migration" id="delete alert_definition table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.422997278Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=83.863µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.426948092Z level=info msg="Executing migration" id="recreate alert_definition table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.4279259Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=980.438µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.431517173Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.432301926Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=784.753µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.43695209Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.437759583Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=807.453µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.44215167Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.442275053Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=127.793µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.446744792Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.449082189Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=2.337447ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.451667804Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.453210498Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.542884ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.455499414Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.456970827Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.470862ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.469769735Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.471143415Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.37924ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.473215915Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.478752394Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.531979ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.482643506Z level=info msg="Executing migration" id="drop alert_definition table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.484610163Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.972227ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.490907154Z level=info msg="Executing migration" id="delete alert_definition_version table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.491049228Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=142.894µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.493229601Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.494702603Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.471872ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.497102223Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.49839279Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.289747ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.500538662Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.501774157Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.237566ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.503767425Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.503842417Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=75.642µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.505531826Z level=info msg="Executing migration" id="drop alert_definition_version table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.506960267Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.423221ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.510245911Z level=info msg="Executing migration" id="create alert_instance table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.511380224Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.133613ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.516257224Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.517407408Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.147583ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.544291482Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.546449544Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=2.169103ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.553455386Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.558654456Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.20262ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.562987871Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.563881606Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=894.066µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.569045765Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.569997793Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=952.607µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.572072232Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.593997654Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=21.920792ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.596370052Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.6164441Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=20.069318ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.61886188Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.619692074Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=829.874µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.62128938Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.622052452Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=762.882µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.665795382Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.67056511Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.779187ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.67335704Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.677753197Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.394607ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.679744314Z level=info msg="Executing migration" id="create alert_rule table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.680511786Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=767.282µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.696530848Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.697481685Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=947.627µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.700620195Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.701407238Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=787.663µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.703327383Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.704189868Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=860.905µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.708102821Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.708160333Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=58.342µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.710406247Z level=info msg="Executing migration" id="add column for to alert_rule"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.7146654Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.259613ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.716531514Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.720667453Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.136049ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:59 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.738847637Z level=info msg="Executing migration" id="add column labels to alert_rule"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.749127053Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=10.287157ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.751417229Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.752667265Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.249596ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.75492998Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.756081443Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.150763ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.758461672Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.765177505Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=6.712273ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.769058527Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.775718459Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.658322ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.781622779Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.783028359Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.40497ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.786303444Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.793664376Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.360052ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.797595759Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.804873039Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=7.2743ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.809158942Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.809259885Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=147.714µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.811803548Z level=info msg="Executing migration" id="create alert_rule_version table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.813317252Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.513164ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.817853323Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.819133209Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.279666ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.824654079Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.826378638Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.72942ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.846280071Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.846495928Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=224.417µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.852533972Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.857650299Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=5.111148ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.862030355Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 10:59:59 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37bc003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.866788162Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.756407ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.86846192Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.873060443Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.598613ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.901269635Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.908564446Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=7.29412ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.917179704Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.92399013Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.809516ms
Nov 27 05:59:59 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.931675301Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.931751024Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=77.403µs
Nov 27 05:59:59 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 27 05:59:59 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v94: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Nov 27 05:59:59 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Nov 27 05:59:59 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.955990692Z level=info msg="Executing migration" id=create_alert_configuration_table
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.957424873Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.439141ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.966375581Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.974066343Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.688031ms
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.976366949Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.976445791Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=80.462µs
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.979002895Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Nov 27 05:59:59 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T10:59:59.986088559Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.084904ms
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : overall HEALTH_OK
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.004312524Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.005970302Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.662588ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.009240146Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.015874907Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.633721ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.018246705Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.019262255Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.01872ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.022406935Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.023543728Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.136953ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.026200044Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.033801853Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=7.597159ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.035659287Z level=info msg="Executing migration" id="create provenance_type table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.036492981Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=833.384µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.040269Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.041096364Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=827.924µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.0447815Z level=info msg="Executing migration" id="create alert_image table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.045442639Z level=info msg="Migration successfully executed" id="create alert_image table" duration=661.079µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.048098625Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.048932859Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=833.634µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.052019218Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.05207825Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=59.282µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.05417448Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.055098157Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=923.097µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.058121334Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.059646048Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.446752ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.06143829Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.068489953Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.073864118Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.074502516Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=645.919µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.07915404Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.080709725Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.552445ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.083523206Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.091469985Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=7.946159ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.093836243Z level=info msg="Executing migration" id="create library_element table v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.095165401Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.328938ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.101748661Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.102965936Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.216515ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.106809777Z level=info msg="Executing migration" id="create library_element_connection table v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.107794895Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=982.968µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.110808102Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.111995816Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.186964ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.114250961Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.115356463Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.104902ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.120919783Z level=info msg="Executing migration" id="increase max description length to 2048"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.120949184Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=30.131µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.123419395Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.123488947Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=72.892µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.126269737Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.126610657Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=340.93µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.12846168Z level=info msg="Executing migration" id="create data_keys table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.129803789Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.341139ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.134622538Z level=info msg="Executing migration" id="create secrets table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.135787801Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.164403ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.13886223Z level=info msg="Executing migration" id="rename data_keys name column to id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.181065316Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=42.186545ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.18330452Z level=info msg="Executing migration" id="add name column into data_keys"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.19022865Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.922799ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.192214367Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.192358361Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=145.014µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.194711699Z level=info msg="Executing migration" id="rename data_keys name column to label"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.221449339Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=26.72849ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.223318343Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.251267678Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.936275ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.253255215Z level=info msg="Executing migration" id="create kv_store table v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.254197732Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=942.967µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.256937961Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.257907489Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.496983ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.260818213Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.261020459Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=202.336µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.263765268Z level=info msg="Executing migration" id="create permission table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.264657374Z level=info msg="Migration successfully executed" id="create permission table" duration=891.776µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.268146324Z level=info msg="Executing migration" id="add unique index permission.role_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.269773571Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.637347ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.273112807Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.274189258Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.077031ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.276185796Z level=info msg="Executing migration" id="create role table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.27701162Z level=info msg="Migration successfully executed" id="create role table" duration=825.254µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.279379288Z level=info msg="Executing migration" id="add column display_name"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.285234036Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.855098ms
Nov 27 06:00:00 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:00 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:00 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:00.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.288999445Z level=info msg="Executing migration" id="add column group_name"
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.294093912Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.094587ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.295719559Z level=info msg="Executing migration" id="add index role.org_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.296501221Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=780.832µs
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: Deploying daemon haproxy.rgw.default.compute-2.quxapy on compute-2
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: overall HEALTH_OK
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.299469207Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.300325121Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=855.114µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.302256597Z level=info msg="Executing migration" id="add index role_org_id_uid"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.303193354Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=936.377µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.305800449Z level=info msg="Executing migration" id="create team role table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.306599682Z level=info msg="Migration successfully executed" id="create team role table" duration=798.863µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.309805234Z level=info msg="Executing migration" id="add index team_role.org_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.310738701Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=933.017µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.313799319Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.314729766Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=929.777µs
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.31799838Z level=info msg="Executing migration" id="add index team_role.team_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.318873356Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=873.885µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.3235366Z level=info msg="Executing migration" id="create user role table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.32422355Z level=info msg="Migration successfully executed" id="create user role table" duration=688.15µs
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.326777713Z level=info msg="Executing migration" id="add index user_role.org_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.327568516Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=790.713µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.332186989Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.332973812Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=786.393µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.335978218Z level=info msg="Executing migration" id="add index user_role.user_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.336788072Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=809.634µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.342397423Z level=info msg="Executing migration" id="create builtin role table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.343122624Z level=info msg="Migration successfully executed" id="create builtin role table" duration=724.921µs
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.346080059Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.347006296Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=926.587µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.350501317Z level=info msg="Executing migration" id="add index builtin_role.name"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.351345581Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=843.844µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.354924554Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.360803383Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.878659ms
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.363628845Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.36449247Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=866.795µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.367469906Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.368378212Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=907.797µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.370767571Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Nov 27 06:00:00 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:00 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:00 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.374043915Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=3.273615ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.376449714Z level=info msg="Executing migration" id="add unique index role.uid"
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.380184162Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=3.728537ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.382811437Z level=info msg="Executing migration" id="create seed assignment table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.384417834Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.604147ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.387543124Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.389968604Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=2.426289ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.392837956Z level=info msg="Executing migration" id="add column hidden to role table"
Nov 27 06:00:00 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:00 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 06:00:00 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 06:00:00 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 06:00:00 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 06:00:00 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.jjyrft on compute-0
Nov 27 06:00:00 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.jjyrft on compute-0
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.406911602Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=14.066155ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.409701072Z level=info msg="Executing migration" id="permission kind migration"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.424956172Z level=info msg="Migration successfully executed" id="permission kind migration" duration=15.250759ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.427508605Z level=info msg="Executing migration" id="permission attribute migration"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.436099532Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.587917ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.438526262Z level=info msg="Executing migration" id="permission identifier migration"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.447988255Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.451773ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.450548529Z level=info msg="Executing migration" id="add permission identifier index"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.451895728Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.344178ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.454859323Z level=info msg="Executing migration" id="add permission action scope role_id index"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.456436098Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.577825ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.459641101Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.461050021Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.41025ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.464959664Z level=info msg="Executing migration" id="create query_history table v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.466302163Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.343788ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.476241899Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.478477343Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.960575ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.494954048Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.495089352Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=139.874µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.509992711Z level=info msg="Executing migration" id="rbac disabled migrator"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.510105674Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=117.383µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.517994892Z level=info msg="Executing migration" id="teams permissions migration"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.518560208Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=571.217µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.528866085Z level=info msg="Executing migration" id="dashboard permissions"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.529656588Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=796.563µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.539130471Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.540012686Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=884.356µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.548419778Z level=info msg="Executing migration" id="drop managed folder create actions"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.54883844Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=423.012µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.556218073Z level=info msg="Executing migration" id="alerting notification permissions"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.556957704Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=741.981µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.559088075Z level=info msg="Executing migration" id="create query_history_star table v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.560799435Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.71235ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.572119561Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.574402877Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.283406ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.587959227Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.600746705Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=12.785268ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.603983019Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.604177404Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=195.535µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.610017723Z level=info msg="Executing migration" id="create correlation table v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.612204516Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.186763ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.616453368Z level=info msg="Executing migration" id="add index correlations.uid"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.618532438Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.07884ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.621457052Z level=info msg="Executing migration" id="add index correlations.source_uid"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.623654065Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.197653ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.626376724Z level=info msg="Executing migration" id="add correlation config column"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.640021577Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.634693ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.643817906Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.646279457Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.463931ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.648540002Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.651088356Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.547854ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.654563636Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:00 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.685670612Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=31.103426ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.694608349Z level=info msg="Executing migration" id="create correlation v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.696255427Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.649398ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.698238784Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.699525461Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.286377ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.702437455Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.703878096Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.439931ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.712829014Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.714012758Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.185974ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.717138509Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.717435427Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=299.979µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.720830995Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.721743901Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=913.116µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.72447181Z level=info msg="Executing migration" id="add provisioning column"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.730687489Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.212019ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.736226408Z level=info msg="Executing migration" id="create entity_events table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.737060812Z level=info msg="Migration successfully executed" id="create entity_events table" duration=835.514µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.743632932Z level=info msg="Executing migration" id="create dashboard public config v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.744690662Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.05905ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.747790481Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.748180703Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.750241782Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.750621823Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.752999272Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.753886057Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=886.335µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.757870632Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.758783218Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=911.696µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.780472273Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.782264025Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.794441ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.788975928Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.792020986Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=3.049488ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.801450667Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.803258819Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.812542ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.806638897Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.808394697Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.756441ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.831995367Z level=info msg="Executing migration" id="Drop public config table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.833923713Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.930596ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.837172326Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.838951428Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.783371ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.843414616Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.844856598Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.445462ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.847623277Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.849064659Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.441722ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.855873575Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.857466691Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.595516ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.863182906Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.897803553Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=34.617908ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.906121453Z level=info msg="Executing migration" id="add annotations_enabled column"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.916347897Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=10.224255ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.925607184Z level=info msg="Executing migration" id="add time_selection_enabled column"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.936213629Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=10.636146ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.939146224Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.939514185Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=370.301µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.943311724Z level=info msg="Executing migration" id="add share column"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.953630641Z level=info msg="Migration successfully executed" id="add share column" duration=10.311927ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.95671937Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.957245855Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=532.695µs
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.968720006Z level=info msg="Executing migration" id="create file table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.970124326Z level=info msg="Migration successfully executed" id="create file table" duration=1.40556ms
Nov 27 06:00:00 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.975620235Z level=info msg="Executing migration" id="file table idx: path natural pk"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.976865951Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.244615ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.982382169Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Nov 27 06:00:00 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.983561283Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.178134ms
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.996056163Z level=info msg="Executing migration" id="create file_meta table"
Nov 27 06:00:00 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:00.997178426Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.129693ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.011519339Z level=info msg="Executing migration" id="file table idx: path key"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.012736344Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.220975ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.018207322Z level=info msg="Executing migration" id="set path collation in file table"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.018341975Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=141.284µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.022461104Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.022597378Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=139.674µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.040919946Z level=info msg="Executing migration" id="managed permissions migration"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.041850193Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=934.896µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.044684224Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.045205369Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=518.655µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.047426053Z level=info msg="Executing migration" id="RBAC action name migrator"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.04904955Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.623267ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.054659032Z level=info msg="Executing migration" id="Add UID column to playlist"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.063378233Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=8.718872ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.067058289Z level=info msg="Executing migration" id="Update uid column values in playlist"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.067238314Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=179.905µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.07128313Z level=info msg="Executing migration" id="Add index for uid in playlist"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.072552327Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.269257ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.077081727Z level=info msg="Executing migration" id="update group index for alert rules"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.077701905Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=618.768µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.084225013Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.084535232Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=310.479µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.086717365Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.08724137Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=523.705µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.090458943Z level=info msg="Executing migration" id="add action column to seed_assignment"
Nov 27 06:00:01 np0005537642 podman[99594]: 2025-11-27 11:00:01.095741905 +0000 UTC m=+0.054710877 container create 059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31 (image=quay.io/ceph/keepalived:2.2.4, name=practical_cori, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, description=keepalived for Ceph, distribution-scope=public, com.redhat.component=keepalived-container)
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.099566405Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.107742ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.101622604Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.110456359Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.833835ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.1157112Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.116933316Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.220415ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.121144027Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Nov 27 06:00:01 np0005537642 systemd[1]: Started libpod-conmon-059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31.scope.
Nov 27 06:00:01 np0005537642 podman[99594]: 2025-11-27 11:00:01.071423264 +0000 UTC m=+0.030392326 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 27 06:00:01 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:01 np0005537642 podman[99594]: 2025-11-27 11:00:01.202278853 +0000 UTC m=+0.161247905 container init 059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31 (image=quay.io/ceph/keepalived:2.2.4, name=practical_cori, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, release=1793, version=2.2.4, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, distribution-scope=public, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., name=keepalived)
Nov 27 06:00:01 np0005537642 podman[99594]: 2025-11-27 11:00:01.217249014 +0000 UTC m=+0.176217986 container start 059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31 (image=quay.io/ceph/keepalived:2.2.4, name=practical_cori, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.buildah.version=1.28.2, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, distribution-scope=public, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.expose-services=)
Nov 27 06:00:01 np0005537642 practical_cori[99611]: 0 0
Nov 27 06:00:01 np0005537642 systemd[1]: libpod-059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31.scope: Deactivated successfully.
Nov 27 06:00:01 np0005537642 conmon[99611]: conmon 059bd4479634e1878dd6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31.scope/container/memory.events
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.244462388Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=123.316091ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.456781764Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.458344609Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.565465ms
Nov 27 06:00:01 np0005537642 podman[99594]: 2025-11-27 11:00:01.467102082 +0000 UTC m=+0.426071144 container attach 059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31 (image=quay.io/ceph/keepalived:2.2.4, name=practical_cori, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vcs-type=git, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Nov 27 06:00:01 np0005537642 podman[99594]: 2025-11-27 11:00:01.468967376 +0000 UTC m=+0.427936418 container died 059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31 (image=quay.io/ceph/keepalived:2.2.4, name=practical_cori, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2, architecture=x86_64, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.567388411Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.56980385Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.415669ms
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: Deploying daemon keepalived.rgw.default.compute-0.jjyrft on compute-0
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.644789881Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.67287114Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.086169ms
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:01 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.927817714Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.94677882Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=18.954516ms
Nov 27 06:00:01 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v96: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Nov 27 06:00:01 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.951688162Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.95230812Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=622.078µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.957673294Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.958064365Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=391.221µs
Nov 27 06:00:01 np0005537642 systemd[1]: var-lib-containers-storage-overlay-8311b50e7e38d3b7f8518a8db3f01119aef8771fec77e5bf453b3cf0a752a7c4-merged.mount: Deactivated successfully.
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.965730896Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.966271332Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=543.986µs
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.971032209Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.971476782Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=442.623µs
Nov 27 06:00:01 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 27 06:00:01 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.991806978Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Nov 27 06:00:01 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.992345663Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=542.446µs
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:01.999925831Z level=info msg="Executing migration" id="create folder table"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.002093784Z level=info msg="Migration successfully executed" id="create folder table" duration=2.171243ms
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.016097647Z level=info msg="Executing migration" id="Add index for parent_uid"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.018734393Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.636016ms
Nov 27 06:00:02 np0005537642 podman[99594]: 2025-11-27 11:00:02.058818368 +0000 UTC m=+1.017787380 container remove 059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31 (image=quay.io/ceph/keepalived:2.2.4, name=practical_cori, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.openshift.expose-services=, com.redhat.component=keepalived-container, release=1793, description=keepalived for Ceph, vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=Ceph keepalived)
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.061269268Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.063665988Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.399209ms
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.078204046Z level=info msg="Executing migration" id="Update folder title length"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.078269138Z level=info msg="Migration successfully executed" id="Update folder title length" duration=68.172µs
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.108154939Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.110514067Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=2.360078ms
Nov 27 06:00:02 np0005537642 systemd[1]: libpod-conmon-059bd4479634e1878dd6e7b4f46481520bce39a8814507a639603fa770481c31.scope: Deactivated successfully.
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.133107818Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.135285271Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=2.179563ms
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.143021924Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.145148055Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.124921ms
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.171215716Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.172296267Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=1.086062ms
Nov 27 06:00:02 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 8c8e7dc8-523b-43b9-ad39-33013e07b0a7 (Global Recovery Event) in 13 seconds
Nov 27 06:00:02 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:02 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:02 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:02.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.29389314Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.294490517Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=599.527µs
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:02.297052) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241202297101, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 989, "num_deletes": 262, "total_data_size": 1144889, "memory_usage": 1175520, "flush_reason": "Manual Compaction"}
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.299070979Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.301300703Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.234894ms
Nov 27 06:00:02 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:02 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:02 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:02.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.409598923Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.412015392Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.44576ms
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.532866444Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.535054897Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.190803ms
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241202544258, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1092694, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7980, "largest_seqno": 8968, "table_properties": {"data_size": 1087595, "index_size": 2303, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13933, "raw_average_key_size": 20, "raw_value_size": 1075838, "raw_average_value_size": 1608, "num_data_blocks": 102, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764241176, "oldest_key_time": 1764241176, "file_creation_time": 1764241202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65f89df2-0592-497f-b5d7-5930e7c7d9aa", "db_session_id": "PS7NKDG3F09YEGXCLO27", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 247401 microseconds, and 9101 cpu microseconds.
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.654273091Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.655809156Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.538195ms
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:02 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:02.544449) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1092694 bytes OK
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:02.544559) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:02.690617) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:02.690693) EVENT_LOG_v1 {"time_micros": 1764241202690679, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:02.690729) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 1139456, prev total WAL file size 1148014, number of live WAL files 2.
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.692086581Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:02.693206) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323633' seq:0, type:0; will stop at (end)
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1067KB)], [20(11MB)]
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241202693256, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 13094327, "oldest_snapshot_seqno": -1}
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.694855221Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=2.770939ms
Nov 27 06:00:02 np0005537642 systemd[1]: Reloading.
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.724758492Z level=info msg="Executing migration" id="create anon_device table"
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.726977576Z level=info msg="Migration successfully executed" id="create anon_device table" duration=2.221614ms
Nov 27 06:00:02 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 27 06:00:02 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 06:00:02 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.961644576Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Nov 27 06:00:02 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:02.963632613Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.991987ms
Nov 27 06:00:03 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3632 keys, 12608909 bytes, temperature: kUnknown
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241203070493, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12608909, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12579273, "index_size": 19478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9093, "raw_key_size": 93331, "raw_average_key_size": 25, "raw_value_size": 12507107, "raw_average_value_size": 3443, "num_data_blocks": 844, "num_entries": 3632, "num_filter_entries": 3632, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1764240802, "oldest_key_time": 0, "file_creation_time": 1764241202, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65f89df2-0592-497f-b5d7-5930e7c7d9aa", "db_session_id": "PS7NKDG3F09YEGXCLO27", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.074332962Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.076923387Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.593855ms
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:03.071236) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12608909 bytes
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:03.101049) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 34.7 rd, 33.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(23.5) write-amplify(11.5) OK, records in: 4180, records dropped: 548 output_compression: NoCompression
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:03.101108) EVENT_LOG_v1 {"time_micros": 1764241203101086, "job": 6, "event": "compaction_finished", "compaction_time_micros": 377729, "compaction_time_cpu_micros": 42981, "output_level": 6, "num_output_files": 1, "total_output_size": 12608909, "num_input_records": 4180, "num_output_records": 3632, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241203101510, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: EVENT_LOG_v1 {"time_micros": 1764241203103878, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:02.693142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:03.103965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:03.103972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:03.103975) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:03.103977) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: rocksdb: (Original Log Time 2025/11/27-11:00:03.103979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 27 06:00:03 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.110978958Z level=info msg="Executing migration" id="create signing_key table"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.113045748Z level=info msg="Migration successfully executed" id="create signing_key table" duration=2.068999ms
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.118756592Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.121201113Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=2.44602ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.126269029Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.127501234Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.233196ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.129256545Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.129597354Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=341.169µs
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.132092026Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.140720095Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=8.624939ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.14748233Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.148398926Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=920.267µs
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.151018461Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.152332589Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.306918ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.154857492Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.156028506Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.171004ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.157967492Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.159159516Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.190954ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.160900046Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.162340468Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.439792ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.164996764Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.166151517Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.153953ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.16868527Z level=info msg="Executing migration" id="create sso_setting table"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.169878745Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.194075ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.174729225Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.175703703Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=976.479µs
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.177268938Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.177603387Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=304.848µs
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.181066197Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.181189561Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=125.134µs
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.185601538Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Nov 27 06:00:03 np0005537642 systemd[1]: Reloading.
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.194208246Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=8.601657ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.196191963Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.2051196Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=8.921687ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.207225831Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.207849209Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=624.458µs
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=migrator t=2025-11-27T11:00:03.209680871Z level=info msg="migrations completed" performed=547 skipped=0 duration=10.897985848s
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=sqlstore t=2025-11-27T11:00:03.211325459Z level=info msg="Created default organization"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=secrets t=2025-11-27T11:00:03.213511182Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=plugin.store t=2025-11-27T11:00:03.243919358Z level=info msg="Loading plugins..."
Nov 27 06:00:03 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 06:00:03 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=local.finder t=2025-11-27T11:00:03.323205972Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=plugin.store t=2025-11-27T11:00:03.323235853Z level=info msg="Plugins loaded" count=55 duration=79.317585ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=query_data t=2025-11-27T11:00:03.32591586Z level=info msg="Query Service initialization"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=live.push_http t=2025-11-27T11:00:03.33876218Z level=info msg="Live Push Gateway initialization"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.migration t=2025-11-27T11:00:03.369852386Z level=info msg=Starting
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.migration t=2025-11-27T11:00:03.371132332Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.migration orgID=1 t=2025-11-27T11:00:03.371555765Z level=info msg="Migrating alerts for organisation"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.migration orgID=1 t=2025-11-27T11:00:03.372284896Z level=info msg="Alerts found to migrate" alerts=0
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.migration t=2025-11-27T11:00:03.374215991Z level=info msg="Completed alerting migration"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.state.manager t=2025-11-27T11:00:03.419822465Z level=info msg="Running in alternative execution of Error/NoData mode"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=infra.usagestats.collector t=2025-11-27T11:00:03.421625727Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=provisioning.datasources t=2025-11-27T11:00:03.422711958Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=provisioning.alerting t=2025-11-27T11:00:03.443213599Z level=info msg="starting to provision alerting"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=provisioning.alerting t=2025-11-27T11:00:03.44325868Z level=info msg="finished to provision alerting"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=grafanaStorageLogger t=2025-11-27T11:00:03.443468876Z level=info msg="Storage starting"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.state.manager t=2025-11-27T11:00:03.445314509Z level=info msg="Warming state cache for startup"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.multiorg.alertmanager t=2025-11-27T11:00:03.446654608Z level=info msg="Starting MultiOrg Alertmanager"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=http.server t=2025-11-27T11:00:03.447686018Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=http.server t=2025-11-27T11:00:03.448439219Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.state.manager t=2025-11-27T11:00:03.453852185Z level=info msg="State cache has been initialized" states=0 duration=8.533606ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ngalert.scheduler t=2025-11-27T11:00:03.453909727Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=ticker t=2025-11-27T11:00:03.453960168Z level=info msg=starting first_tick=2025-11-27T11:00:10Z
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=sqlstore.transactions t=2025-11-27T11:00:03.456035558Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Nov 27 06:00:03 np0005537642 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.jjyrft for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=sqlstore.transactions t=2025-11-27T11:00:03.470987109Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=provisioning.dashboard t=2025-11-27T11:00:03.539904334Z level=info msg="starting to provision dashboards"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=grafana.update.checker t=2025-11-27T11:00:03.559163069Z level=info msg="Update check succeeded" duration=115.57454ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=plugins.update.checker t=2025-11-27T11:00:03.559695444Z level=info msg="Update check succeeded" duration=116.016532ms
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=grafana-apiserver t=2025-11-27T11:00:03.637672141Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=grafana-apiserver t=2025-11-27T11:00:03.638793073Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Nov 27 06:00:03 np0005537642 podman[99760]: 2025-11-27 11:00:03.711552229 +0000 UTC m=+0.065405915 container create cbcac57b7b07265eef9628cadbbb672e4a23b6e6aebd645f5d8dbd283ad4303f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft, io.openshift.expose-services=, release=1793, com.redhat.component=keepalived-container, version=2.2.4, architecture=x86_64, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., description=keepalived for Ceph)
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:03 np0005537642 podman[99760]: 2025-11-27 11:00:03.66852846 +0000 UTC m=+0.022382136 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Nov 27 06:00:03 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97cf1710fbd2e0e9d32e3ce5e758de4e27d7a43b38bccce678e3347932ab591b/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:03 np0005537642 podman[99760]: 2025-11-27 11:00:03.856209196 +0000 UTC m=+0.210062872 container init cbcac57b7b07265eef9628cadbbb672e4a23b6e6aebd645f5d8dbd283ad4303f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Nov 27 06:00:03 np0005537642 podman[99760]: 2025-11-27 11:00:03.865498784 +0000 UTC m=+0.219352430 container start cbcac57b7b07265eef9628cadbbb672e4a23b6e6aebd645f5d8dbd283ad4303f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft, version=2.2.4, com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., name=keepalived, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:03 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: Starting Keepalived v2.2.4 (08/21,2021)
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: Running on Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 (built for Linux 5.14.0)
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: Configuration file /etc/keepalived/keepalived.conf
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: Starting VRRP child process, pid=4
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: Startup complete
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 11:00:03 2025: (VI_0) Entering BACKUP STATE
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: (VI_0) Entering BACKUP STATE (init)
Nov 27 06:00:03 np0005537642 bash[99760]: cbcac57b7b07265eef9628cadbbb672e4a23b6e6aebd645f5d8dbd283ad4303f
Nov 27 06:00:03 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:03 2025: VRRP_Script(check_backend) succeeded
Nov 27 06:00:03 np0005537642 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.jjyrft for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 06:00:03 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 27 06:00:03 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 06:00:04 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 27 06:00:04 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:04 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 06:00:04 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 06:00:04 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 06:00:04 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 06:00:04 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.zoylhh on compute-2
Nov 27 06:00:04 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.zoylhh on compute-2
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 27 06:00:04 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0[99196]: logger=provisioning.dashboard t=2025-11-27T11:00:04.276621797Z level=info msg="finished to provision dashboards"
Nov 27 06:00:04 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:04 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:04 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:04.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:04 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 27 06:00:04 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 100 pg[9.1d( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=100) [1] r=0 lpr=100 pi=[83,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:04 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 100 pg[9.d( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=100) [1] r=0 lpr=100 pi=[83,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:04 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:04 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:04 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:04.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:04 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl[97895]: Thu Nov 27 11:00:04 2025: (VI_0) Entering MASTER STATE
Nov 27 06:00:04 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:04 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:04 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 27 06:00:05 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: Deploying daemon keepalived.rgw.default.compute-2.zoylhh on compute-2
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 27 06:00:05 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 101 pg[9.d( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=101) [1]/[2] r=-1 lpr=101 pi=[83,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:05 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 101 pg[9.d( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=101) [1]/[2] r=-1 lpr=101 pi=[83,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:05 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 101 pg[9.1d( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=101) [1]/[2] r=-1 lpr=101 pi=[83,101)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:05 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 101 pg[9.1d( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=101) [1]/[2] r=-1 lpr=101 pi=[83,101)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:05 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev a105c4e0-f087-4fb8-a0b4-17642d66bf09 (Updating ingress.rgw.default deployment (+4 -> 4))
Nov 27 06:00:05 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event a105c4e0-f087-4fb8-a0b4-17642d66bf09 (Updating ingress.rgw.default deployment (+4 -> 4)) in 13 seconds
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Nov 27 06:00:05 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:05 np0005537642 ceph-mgr[74636]: [progress INFO root] update: starting ev 3e12ac8b-8d5f-46ba-8320-1b77398ff7de (Updating prometheus deployment (+1 -> 1))
Nov 27 06:00:05 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:05 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:06 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:05 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:06 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 27 06:00:06 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 27 06:00:06 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow[97533]: [WARNING] 330/110006 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Nov 27 06:00:06 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:06 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:06 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:06 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:06 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:06 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291ec215d0 =====
Nov 27 06:00:06 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:06.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:06 np0005537642 radosgw[89563]: ====== req done req=0x7f291ec215d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:06 np0005537642 radosgw[89563]: beast: 0x7f291ec215d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:06.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:06 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Nov 27 06:00:06 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 27 06:00:06 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 27 06:00:07 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 103 pg[9.1d( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=101/83 les/c/f=102/84/0 sis=103) [1] r=0 lpr=103 pi=[83,103)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:07 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 103 pg[9.1d( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=101/83 les/c/f=102/84/0 sis=103) [1] r=0 lpr=103 pi=[83,103)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:07 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 103 pg[9.d( v 57'872 (0'0,57'872] local-lis/les=0/0 n=6 ec=67/50 lis/c=101/83 les/c/f=102/84/0 sis=103) [1] r=0 lpr=103 pi=[83,103)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:07 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 103 pg[9.d( v 57'872 (0'0,57'872] local-lis/les=0/0 n=6 ec=67/50 lis/c=101/83 les/c/f=102/84/0 sis=103) [1] r=0 lpr=103 pi=[83,103)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:07 np0005537642 ceph-mgr[74636]: [progress INFO root] Writing back 28 completed events
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-rgw-default-compute-0-jjyrft[99775]: Thu Nov 27 11:00:07 2025: (VI_0) Entering MASTER STATE
Nov 27 06:00:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:07 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:07 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:07 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: Deploying daemon prometheus.compute-0 on compute-0
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 27 06:00:07 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:07 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 27 06:00:08 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 27 06:00:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 27 06:00:08 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 27 06:00:08 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 27 06:00:08 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 104 pg[9.d( v 57'872 (0'0,57'872] local-lis/les=103/104 n=6 ec=67/50 lis/c=101/83 les/c/f=102/84/0 sis=103) [1] r=0 lpr=103 pi=[83,103)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:08 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 104 pg[9.1d( v 57'872 (0'0,57'872] local-lis/les=103/104 n=5 ec=67/50 lis/c=101/83 les/c/f=102/84/0 sis=103) [1] r=0 lpr=103 pi=[83,103)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:08 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:08 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:08 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v105: 353 pgs: 2 peering, 1 active+clean+scrubbing, 350 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:08 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:08 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291ec215d0 =====
Nov 27 06:00:08 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:08 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:08.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:08 np0005537642 radosgw[89563]: ====== req done req=0x7f291ec215d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 27 06:00:08 np0005537642 radosgw[89563]: beast: 0x7f291ec215d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:08.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 27 06:00:08 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 27 06:00:09 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 27 06:00:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:09 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac0016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:09 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:09 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8001b40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:09 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 27 06:00:09 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 27 06:00:10 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:10 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:10 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v106: 353 pgs: 2 peering, 1 active+clean+scrubbing, 350 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:10 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:10 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:10 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:10.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:10 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:10 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:10 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:10.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:10 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 27 06:00:10 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 27 06:00:11 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:11 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:11 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:11 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac001fc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_11:00:11
Nov 27 06:00:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 06:00:11 np0005537642 ceph-mgr[74636]: [balancer INFO root] Some PGs (0.005666) are inactive; try again later
Nov 27 06:00:11 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 27 06:00:11 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] scanning for idle connections..
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [volumes INFO mgr_util] cleaning up connections: []
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 06:00:12 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e104 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Nov 27 06:00:12 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:12 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8001b40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:12 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 2 peering, 1 active+clean+scrubbing, 350 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:12 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:12 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:12 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:12 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:12 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:12 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:12.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:12 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 27 06:00:12 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 27 06:00:13 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:13 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:13 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:13 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:13 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.1d deep-scrub starts
Nov 27 06:00:13 np0005537642 ceph-osd[82775]: log_channel(cluster) log [DBG] : 9.1d deep-scrub ok
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.128065287 +0000 UTC m=+6.723142963 volume create b19c04d4cf9d7d48c2aeea7b8ffac715b3de22add60f9cea08cfa474166cd6fa
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.104313102 +0000 UTC m=+6.699390798 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.17051049 +0000 UTC m=+6.765588166 container create 076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4 (image=quay.io/prometheus/prometheus:v2.51.0, name=thirsty_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:14 np0005537642 systemd[92942]: Starting Mark boot as successful...
Nov 27 06:00:14 np0005537642 systemd[92942]: Finished Mark boot as successful.
Nov 27 06:00:14 np0005537642 systemd[1]: Started libpod-conmon-076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4.scope.
Nov 27 06:00:14 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bf855f23ace86ca0100923587d54127e1775c6ccb0d658169778ad8a451003/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.376922796 +0000 UTC m=+6.972000482 container init 076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4 (image=quay.io/prometheus/prometheus:v2.51.0, name=thirsty_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.392264348 +0000 UTC m=+6.987342024 container start 076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4 (image=quay.io/prometheus/prometheus:v2.51.0, name=thirsty_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:14 np0005537642 thirsty_shannon[100141]: 65534 65534
Nov 27 06:00:14 np0005537642 systemd[1]: libpod-076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4.scope: Deactivated successfully.
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.469942495 +0000 UTC m=+7.065020211 container attach 076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4 (image=quay.io/prometheus/prometheus:v2.51.0, name=thirsty_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.471043157 +0000 UTC m=+7.066120833 container died 076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4 (image=quay.io/prometheus/prometheus:v2.51.0, name=thirsty_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:14 np0005537642 systemd[1]: var-lib-containers-storage-overlay-a1bf855f23ace86ca0100923587d54127e1775c6ccb0d658169778ad8a451003-merged.mount: Deactivated successfully.
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.681479779 +0000 UTC m=+7.276557455 container remove 076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4 (image=quay.io/prometheus/prometheus:v2.51.0, name=thirsty_shannon, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:14 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:14 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac001fc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:14 np0005537642 podman[99880]: 2025-11-27 11:00:14.718374932 +0000 UTC m=+7.313452608 volume remove b19c04d4cf9d7d48c2aeea7b8ffac715b3de22add60f9cea08cfa474166cd6fa
Nov 27 06:00:14 np0005537642 systemd[1]: libpod-conmon-076228f102d2eac11dd603174f9f372ee2be49f8401b80031f8d4b72fa2550a4.scope: Deactivated successfully.
Nov 27 06:00:14 np0005537642 podman[100158]: 2025-11-27 11:00:14.830169742 +0000 UTC m=+0.076979658 volume create c48ca1ffe3270fae1b992afc1dfe152e39c1bcdf1f529808796b856963323f02
Nov 27 06:00:14 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 1 active+clean+scrubbing, 352 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Nov 27 06:00:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 27 06:00:14 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:14 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:14 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:14.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:14 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:14 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:14 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:14.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:14 np0005537642 podman[100158]: 2025-11-27 11:00:14.858882399 +0000 UTC m=+0.105692315 container create 8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c (image=quay.io/prometheus/prometheus:v2.51.0, name=confident_mclaren, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:14 np0005537642 podman[100158]: 2025-11-27 11:00:14.781721847 +0000 UTC m=+0.028531783 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 27 06:00:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 27 06:00:14 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 27 06:00:14 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 27 06:00:14 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 27 06:00:14 np0005537642 systemd[1]: Started libpod-conmon-8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c.scope.
Nov 27 06:00:14 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:14 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39db95783104e11934a62875875467cf1ca268020bce25186010f2af0b1d42e2/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:15 np0005537642 podman[100158]: 2025-11-27 11:00:15.039319017 +0000 UTC m=+0.286128953 container init 8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c (image=quay.io/prometheus/prometheus:v2.51.0, name=confident_mclaren, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:15 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 27 06:00:15 np0005537642 podman[100158]: 2025-11-27 11:00:15.046735221 +0000 UTC m=+0.293545137 container start 8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c (image=quay.io/prometheus/prometheus:v2.51.0, name=confident_mclaren, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 105 pg[9.1f( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=79/79 les/c/f=80/80/0 sis=105) [1] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:15 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 105 pg[9.f( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=79/79 les/c/f=80/80/0 sis=105) [1] r=0 lpr=105 pi=[79,105)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:15 np0005537642 confident_mclaren[100177]: 65534 65534
Nov 27 06:00:15 np0005537642 systemd[1]: libpod-8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c.scope: Deactivated successfully.
Nov 27 06:00:15 np0005537642 podman[100158]: 2025-11-27 11:00:15.054887116 +0000 UTC m=+0.301697052 container attach 8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c (image=quay.io/prometheus/prometheus:v2.51.0, name=confident_mclaren, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:15 np0005537642 podman[100158]: 2025-11-27 11:00:15.056209604 +0000 UTC m=+0.303019550 container died 8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c (image=quay.io/prometheus/prometheus:v2.51.0, name=confident_mclaren, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:15 np0005537642 systemd[1]: var-lib-containers-storage-overlay-39db95783104e11934a62875875467cf1ca268020bce25186010f2af0b1d42e2-merged.mount: Deactivated successfully.
Nov 27 06:00:15 np0005537642 podman[100158]: 2025-11-27 11:00:15.502009474 +0000 UTC m=+0.748819390 container remove 8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c (image=quay.io/prometheus/prometheus:v2.51.0, name=confident_mclaren, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:15 np0005537642 podman[100158]: 2025-11-27 11:00:15.546870557 +0000 UTC m=+0.793680483 volume remove c48ca1ffe3270fae1b992afc1dfe152e39c1bcdf1f529808796b856963323f02
Nov 27 06:00:15 np0005537642 systemd[1]: libpod-conmon-8a7b1ee58d5edc01329de921286705a9110ed3c8ad3f26c474431b7f8351715c.scope: Deactivated successfully.
Nov 27 06:00:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:15 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8001b40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:15 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:15 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 27 06:00:16 np0005537642 systemd[1]: Reloading.
Nov 27 06:00:16 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 27 06:00:16 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:16 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Nov 27 06:00:16 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 06:00:16 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 06:00:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 27 06:00:16 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 27 06:00:16 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 106 pg[9.f( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=79/79 les/c/f=80/80/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:16 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 106 pg[9.1f( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=79/79 les/c/f=80/80/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:16 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 106 pg[9.f( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=79/79 les/c/f=80/80/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:16 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 106 pg[9.1f( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=79/79 les/c/f=80/80/0 sis=106) [1]/[2] r=-1 lpr=106 pi=[79,106)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:16 np0005537642 systemd[1]: Reloading.
Nov 27 06:00:16 np0005537642 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 27 06:00:16 np0005537642 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 27 06:00:16 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:16 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:16 np0005537642 systemd[1]: Starting Ceph prometheus.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7...
Nov 27 06:00:16 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v111: 353 pgs: 1 active+clean+scrubbing, 352 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:16 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Nov 27 06:00:16 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 27 06:00:16 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:16 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:16 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:16.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:16 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:16 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 27 06:00:16 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:16.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 27 06:00:17 np0005537642 podman[100327]: 2025-11-27 11:00:17.052560122 +0000 UTC m=+0.043814043 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Nov 27 06:00:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 27 06:00:17 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 6b37f446-8edd-43f6-9b5e-dc2f658ce66d (Global Recovery Event) in 5 seconds
Nov 27 06:00:17 np0005537642 podman[100327]: 2025-11-27 11:00:17.337252953 +0000 UTC m=+0.328506864 container create cb36667204a38dfbc0ab1cdb43efac259d05d4f7099b4b24b76953126a2609c5 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:17 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 27 06:00:17 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132e0825e82d93bd6a05ff054f3120500da575af586177404357697cc9300a26/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:17 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/132e0825e82d93bd6a05ff054f3120500da575af586177404357697cc9300a26/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:17 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 27 06:00:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 27 06:00:17 np0005537642 podman[100327]: 2025-11-27 11:00:17.681704026 +0000 UTC m=+0.672957957 container init cb36667204a38dfbc0ab1cdb43efac259d05d4f7099b4b24b76953126a2609c5 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:17 np0005537642 podman[100327]: 2025-11-27 11:00:17.687475752 +0000 UTC m=+0.678729663 container start cb36667204a38dfbc0ab1cdb43efac259d05d4f7099b4b24b76953126a2609c5 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:17 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.723Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.723Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.723Z caller=main.go:623 level=info host_details="(Linux 5.14.0-642.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025 x86_64 compute-0 (none))"
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.723Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.723Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.725Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.726Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.732Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.732Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.736Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.736Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.88µs
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.736Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.738Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.738Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=72.782µs wal_replay_duration=1.965757ms wbl_replay_duration=210ns total_replay_duration=2.123861ms
Nov 27 06:00:17 np0005537642 bash[100327]: cb36667204a38dfbc0ab1cdb43efac259d05d4f7099b4b24b76953126a2609c5
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.742Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.742Z caller=main.go:1153 level=info msg="TSDB started"
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.742Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Nov 27 06:00:17 np0005537642 systemd[1]: Started Ceph prometheus.compute-0 for 4c838139-e0c9-556a-a9ca-e4422f459af7.
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:17 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac001fc0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.777Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=35.42569ms db_storage=1.7µs remote_storage=2.88µs web_handler=1.02µs query_engine=1.62µs scrape=5.113438ms scrape_sd=307.719µs notify=21.85µs notify_sd=17.151µs rules=29.037796ms tracing=19.07µs
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.777Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0[100342]: ts=2025-11-27T11:00:17.777Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Nov 27 06:00:17 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:17 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8002f00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:17 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [progress INFO root] complete: finished ev 3e12ac8b-8d5f-46ba-8320-1b77398ff7de (Updating prometheus deployment (+1 -> 1))
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [progress INFO root] Completed event 3e12ac8b-8d5f-46ba-8320-1b77398ff7de (Updating prometheus deployment (+1 -> 1)) in 13 seconds
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 27 06:00:18 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 107 pg[9.10( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=107) [1] r=0 lpr=107 pi=[67,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:18 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:18 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:18 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 1 active+recovering+remapped, 1 active+recovery_wait+remapped, 351 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 12/225 objects misplaced (5.333%)
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:18 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Nov 27 06:00:18 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:18 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:18 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:18 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:18 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:18 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:18.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:19 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Nov 27 06:00:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:19 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 27 06:00:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:19 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Nov 27 06:00:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 27 06:00:19 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 27 06:00:19 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 27 06:00:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 108 pg[9.1f( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=106/79 les/c/f=107/80/0 sis=108) [1] r=0 lpr=108 pi=[79,108)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 108 pg[9.1f( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=106/79 les/c/f=107/80/0 sis=108) [1] r=0 lpr=108 pi=[79,108)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 108 pg[9.10( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=108) [1]/[0] r=-1 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:19 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 108 pg[9.10( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=108) [1]/[0] r=-1 lpr=108 pi=[67,108)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:19 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  e: '/usr/bin/ceph-mgr'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  0: '/usr/bin/ceph-mgr'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  1: '-n'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  2: 'mgr.compute-0.qnrkij'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  3: '-f'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  4: '--setuser'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  5: 'ceph'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  6: '--setgroup'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  7: 'ceph'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  8: '--default-log-to-file=false'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  9: '--default-log-to-journald=true'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  10: '--default-log-to-stderr=false'
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr respawn  exe_path /proc/self/exe
Nov 27 06:00:19 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.qnrkij(active, since 2m), standbys: compute-1.npcryb, compute-2.yyrxaz
Nov 27 06:00:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:19 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc0034e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:19 np0005537642 systemd[1]: session-35.scope: Deactivated successfully.
Nov 27 06:00:19 np0005537642 systemd[1]: session-35.scope: Consumed 56.009s CPU time.
Nov 27 06:00:19 np0005537642 systemd-logind[801]: Session 35 logged out. Waiting for processes to exit.
Nov 27 06:00:19 np0005537642 systemd-logind[801]: Removed session 35.
Nov 27 06:00:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setuser ceph since I am not root
Nov 27 06:00:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ignoring --setgroup ceph since I am not root
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: pidfile_write: ignore empty --pid-file
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'alerts'
Nov 27 06:00:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:19 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac0032f0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:19 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:19.955+0000 7f49e870d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 27 06:00:19 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'balancer'
Nov 27 06:00:20 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:20.033+0000 7f49e870d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 06:00:20 np0005537642 ceph-mgr[74636]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 27 06:00:20 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'cephadm'
Nov 27 06:00:20 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 27 06:00:20 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:20 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8002f00 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:20 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'crash'
Nov 27 06:00:20 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:20.859+0000 7f49e870d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 06:00:20 np0005537642 ceph-mgr[74636]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 27 06:00:20 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'dashboard'
Nov 27 06:00:20 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:20 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:20 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:20.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:20 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:20 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:20 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:20.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'devicehealth'
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:21.546+0000 7f49e870d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'diskprediction_local'
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]:  from numpy import show_config as show_numpy_config
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:21.698+0000 7f49e870d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'influx'
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:21 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:21.763+0000 7f49e870d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'insights'
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'iostat'
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:21 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc004970 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:21 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:21.890+0000 7f49e870d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 27 06:00:21 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'k8sevents'
Nov 27 06:00:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 27 06:00:22 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'localpool'
Nov 27 06:00:22 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:22 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Nov 27 06:00:22 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mds_autoscaler'
Nov 27 06:00:22 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 27 06:00:22 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 109 pg[9.f( v 57'872 (0'0,57'872] local-lis/les=0/0 n=7 ec=67/50 lis/c=106/79 les/c/f=107/80/0 sis=109) [1] r=0 lpr=109 pi=[79,109)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:22 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 109 pg[9.f( v 57'872 (0'0,57'872] local-lis/les=0/0 n=7 ec=67/50 lis/c=106/79 les/c/f=107/80/0 sis=109) [1] r=0 lpr=109 pi=[79,109)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:22 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 06:00:22 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'mirroring'
Nov 27 06:00:22 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'nfs'
Nov 27 06:00:22 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:22 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac0032f0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:22 np0005537642 ceph-mon[74338]: from='mgr.14424 192.168.122.100:0/1707931784' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Nov 27 06:00:22 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 109 pg[9.1f( v 57'872 (0'0,57'872] local-lis/les=108/109 n=5 ec=67/50 lis/c=106/79 les/c/f=107/80/0 sis=108) [1] r=0 lpr=108 pi=[79,108)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:22 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:22 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:22 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:22.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:22 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:22 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:22 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:22 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:22.890+0000 7f49e870d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 06:00:22 np0005537642 ceph-mgr[74636]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 27 06:00:22 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'orchestrator'
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:23.124+0000 7f49e870d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_perf_query'
Nov 27 06:00:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:23.205+0000 7f49e870d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'osd_support'
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:23.277+0000 7f49e870d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'pg_autoscaler'
Nov 27 06:00:23 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:23.359+0000 7f49e870d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'progress'
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:23.429+0000 7f49e870d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'prometheus'
Nov 27 06:00:23 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:23.752+0000 7f49e870d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rbd_support'
Nov 27 06:00:23 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 110 pg[9.10( v 57'872 (0'0,57'872] local-lis/les=0/0 n=2 ec=67/50 lis/c=108/67 les/c/f=109/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:23 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 110 pg[9.10( v 57'872 (0'0,57'872] local-lis/les=0/0 n=2 ec=67/50 lis/c=108/67 les/c/f=109/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:23 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:23.858+0000 7f49e870d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 27 06:00:23 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'restful'
Nov 27 06:00:23 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:23 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:23 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 110 pg[9.f( v 57'872 (0'0,57'872] local-lis/les=109/110 n=7 ec=67/50 lis/c=106/79 les/c/f=107/80/0 sis=109) [1] r=0 lpr=109 pi=[79,109)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:24 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rgw'
Nov 27 06:00:24 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:24.298+0000 7f49e870d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 06:00:24 np0005537642 ceph-mgr[74636]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 27 06:00:24 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'rook'
Nov 27 06:00:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 27 06:00:24 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 27 06:00:24 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 27 06:00:24 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:24 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc004970 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:24 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 111 pg[9.10( v 57'872 (0'0,57'872] local-lis/les=110/111 n=2 ec=67/50 lis/c=108/67 les/c/f=109/68/0 sis=110) [1] r=0 lpr=110 pi=[67,110)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:24 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:24 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:24 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:24.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:24 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:24 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:24.870+0000 7f49e870d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 06:00:24 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:24 np0005537642 ceph-mgr[74636]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 27 06:00:24 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'selftest'
Nov 27 06:00:24 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:24.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:24 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:24.940+0000 7f49e870d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 06:00:24 np0005537642 ceph-mgr[74636]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 27 06:00:24 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'snap_schedule'
Nov 27 06:00:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:25.017+0000 7f49e870d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'stats'
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'status'
Nov 27 06:00:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:25.159+0000 7f49e870d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telegraf'
Nov 27 06:00:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:25.229+0000 7f49e870d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'telemetry'
Nov 27 06:00:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:25.385+0000 7f49e870d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'test_orchestrator'
Nov 27 06:00:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:25.638+0000 7f49e870d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'volumes'
Nov 27 06:00:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:25 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:25 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz restarted
Nov 27 06:00:25 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.yyrxaz started
Nov 27 06:00:25 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb restarted
Nov 27 06:00:25 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.npcryb started
Nov 27 06:00:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:25 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:25 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:25.975+0000 7f49e870d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 27 06:00:25 np0005537642 ceph-mgr[74636]: mgr[py] Loading python module 'zabbix'
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:26.053+0000 7f49e870d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: ms_deliver_dispatch: unhandled message 0x5619745cb860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Active manager daemon compute-0.qnrkij restarted
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.qnrkij
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map Activating!
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr handle_mgr_map I am now activating
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.qnrkij(active, starting, since 0.113251s), standbys: compute-2.yyrxaz, compute-1.npcryb
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.pbsgjz"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.pbsgjz"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e10 all = 0
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.dfsdca"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.dfsdca"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e10 all = 0
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.pktzxb"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.pktzxb"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e10 all = 0
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-0.qnrkij", "id": "compute-0.qnrkij"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.yyrxaz", "id": "compute-2.yyrxaz"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-2.yyrxaz", "id": "compute-2.yyrxaz"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.npcryb", "id": "compute-1.npcryb"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mgr metadata", "who": "compute-1.npcryb", "id": "compute-1.npcryb"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).mds e10 all = 1
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: balancer
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(cluster) log [INF] : Manager daemon compute-0.qnrkij is now available
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [balancer INFO root] Starting
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [balancer INFO root] Optimize plan auto_2025-11-27_11:00:26
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: cephadm
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: crash
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: dashboard
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: devicehealth
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO access_control] Loading user roles DB version=2
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO sso] Loading SSO DB version=1
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO root] Configured CherryPy, starting engine...
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: iostat
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Starting
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: nfs
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: orchestrator
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: pg_autoscaler
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: progress
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [pg_autoscaler INFO root] _maybe_adjust
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [progress INFO root] Loading...
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f4969a7ec10>, <progress.module.GhostEvent object at 0x7f4969a7ec40>, <progress.module.GhostEvent object at 0x7f4969a7eb50>, <progress.module.GhostEvent object at 0x7f4969a7eb80>, <progress.module.GhostEvent object at 0x7f4969a7ebb0>, <progress.module.GhostEvent object at 0x7f4969a7eac0>, <progress.module.GhostEvent object at 0x7f4969a7eaf0>, <progress.module.GhostEvent object at 0x7f4969a7eb20>, <progress.module.GhostEvent object at 0x7f4969a7ea30>, <progress.module.GhostEvent object at 0x7f4969a7ea60>, <progress.module.GhostEvent object at 0x7f4969a7ea90>, <progress.module.GhostEvent object at 0x7f4969a7e9a0>, <progress.module.GhostEvent object at 0x7f4969a7e9d0>, <progress.module.GhostEvent object at 0x7f4969a7ea00>, <progress.module.GhostEvent object at 0x7f4969a7e8e0>, <progress.module.GhostEvent object at 0x7f4969a7e910>, <progress.module.GhostEvent object at 0x7f4969a7e940>, <progress.module.GhostEvent object at 0x7f4969a7e670>, <progress.module.GhostEvent object at 0x7f4969a7e2b0>, <progress.module.GhostEvent object at 0x7f4969a7e520>, <progress.module.GhostEvent object at 0x7f4969a7e550>, <progress.module.GhostEvent object at 0x7f4969a7e580>, <progress.module.GhostEvent object at 0x7f4969a7e5b0>, <progress.module.GhostEvent object at 0x7f4969a7e5e0>, <progress.module.GhostEvent object at 0x7f4969a7e610>, <progress.module.GhostEvent object at 0x7f4969a7e640>, <progress.module.GhostEvent object at 0x7f4969a7e730>, <progress.module.GhostEvent object at 0x7f4969a7e760>] historic events
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [progress INFO root] Loaded OSDMap, ready.
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: prometheus
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus INFO root] server_addr: :: server_port: 9283
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus INFO root] Cache enabled
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus INFO root] starting metric collection thread
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus INFO root] Starting engine...
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus INFO cherrypy.error] [27/Nov/2025:11:00:26] ENGINE Bus STARTING
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: [27/Nov/2025:11:00:26] ENGINE Bus STARTING
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: CherryPy Checker:
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: The Application mounted at '' has an empty config.
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] recovery thread starting
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] starting setup
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: rbd_support
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: restful
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [restful INFO root] server_addr: :: server_port: 8003
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: status
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: telemetry
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [restful WARNING root] server not running: no certificate configured
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] PerfHandler: starting
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: vms, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: mgr load Constructed class from module: volumes
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: volumes, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: backups, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_task_task: images, start_after=
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:26.480+0000 7f494c45f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:26.482+0000 7f495446f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:26.482+0000 7f495446f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:26.482+0000 7f495446f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:26.482+0000 7f495446f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: 2025-11-27T11:00:26.482+0000 7f495446f640 -1 client.0 error registering admin socket command: (17) File exists
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TaskHandler: starting
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"} v 0)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [rbd_support INFO root] setup complete
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus INFO cherrypy.error] [27/Nov/2025:11:00:26] ENGINE Serving on http://:::9283
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus INFO cherrypy.error] [27/Nov/2025:11:00:26] ENGINE Bus STARTED
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [prometheus INFO root] Engine started.
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: [27/Nov/2025:11:00:26] ENGINE Serving on http://:::9283
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: [27/Nov/2025:11:00:26] ENGINE Bus STARTED
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Nov 27 06:00:26 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:26 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 14 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: Active manager daemon compute-0.qnrkij restarted
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: Activating manager daemon compute-0.qnrkij
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: Manager daemon compute-0.qnrkij is now available
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/mirror_snapshot_schedule"}]: dispatch
Nov 27 06:00:26 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.qnrkij/trash_purge_schedule"}]: dispatch
Nov 27 06:00:26 np0005537642 systemd-logind[801]: New session 37 of user ceph-admin.
Nov 27 06:00:26 np0005537642 systemd[1]: Started Session 37 of User ceph-admin.
Nov 27 06:00:26 np0005537642 ceph-mgr[74636]: [dashboard INFO dashboard.module] Engine started.
Nov 27 06:00:26 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:26 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:26 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:26.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:26 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:26 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 27 06:00:26 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:26.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 27 06:00:27 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.qnrkij(active, since 1.25371s), standbys: compute-2.yyrxaz, compute-1.npcryb
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:27 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:11:00:27] ENGINE Bus STARTING
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:11:00:27] ENGINE Bus STARTING
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:11:00:27] ENGINE Serving on https://192.168.122.100:7150
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:11:00:27] ENGINE Serving on https://192.168.122.100:7150
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:11:00:27] ENGINE Client ('192.168.122.100', 52128) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:11:00:27] ENGINE Client ('192.168.122.100', 52128) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:11:00:27] ENGINE Serving on http://192.168.122.100:8765
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:11:00:27] ENGINE Serving on http://192.168.122.100:8765
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: [cephadm INFO cherrypy.error] [27/Nov/2025:11:00:27] ENGINE Bus STARTED
Nov 27 06:00:27 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : [27/Nov/2025:11:00:27] ENGINE Bus STARTED
Nov 27 06:00:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:27 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:27 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:27 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:27 np0005537642 podman[100709]: 2025-11-27 11:00:27.983045246 +0000 UTC m=+0.358807937 container exec 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 06:00:28 np0005537642 podman[100709]: 2025-11-27 11:00:28.134032735 +0000 UTC m=+0.509795436 container exec_died 10d3b07b5dbe91b896d72c044972881d213b8aa535ac9c97588798b2ade7a7fa (image=quay.io/ceph/ceph:v19, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 06:00:28 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Nov 27 06:00:28 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 27 06:00:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow[97533]: [WARNING] 330/110028 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Nov 27 06:00:28 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 27 06:00:28 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:28 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:28 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:28 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:28 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:28.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:28 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:28 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:28 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:28.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: [27/Nov/2025:11:00:27] ENGINE Bus STARTING
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: [27/Nov/2025:11:00:27] ENGINE Serving on https://192.168.122.100:7150
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: [27/Nov/2025:11:00:27] ENGINE Client ('192.168.122.100', 52128) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: [27/Nov/2025:11:00:27] ENGINE Serving on http://192.168.122.100:8765
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: [27/Nov/2025:11:00:27] ENGINE Bus STARTED
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.qnrkij(active, since 3s), standbys: compute-2.yyrxaz, compute-1.npcryb
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 06:00:29 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:29 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:29 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:29 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37dc004970 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:29 np0005537642 podman[100858]: 2025-11-27 11:00:29.969822289 +0000 UTC m=+0.375121656 container exec ef4ca26692ee3ac96369d611d0545c11cb87735cb1047144085ac48b0518225b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:29 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 06:00:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:30 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:30 np0005537642 ceph-mgr[74636]: [devicehealth INFO root] Check health
Nov 27 06:00:30 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Nov 27 06:00:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 27 06:00:30 np0005537642 podman[100858]: 2025-11-27 11:00:30.384301809 +0000 UTC m=+0.789601106 container exec_died ef4ca26692ee3ac96369d611d0545c11cb87735cb1047144085ac48b0518225b (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:30 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:30 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 113 pg[9.11( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:30 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:30 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:30 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:30 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:30 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:30.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:30 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:30 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:30 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:30.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 27 06:00:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:31 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:31 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:31 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b4000e00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:31 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.qnrkij(active, since 5s), standbys: compute-2.yyrxaz, compute-1.npcryb
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 06:00:32 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 114 pg[9.12( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=114) [1] r=0 lpr=114 pi=[67,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:32 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 114 pg[9.11( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=114) [1]/[0] r=-1 lpr=114 pi=[67,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:32 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 114 pg[9.11( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=114) [1]/[0] r=-1 lpr=114 pi=[67,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 06:00:32 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v8: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 27 06:00:32 np0005537642 podman[100941]: 2025-11-27 11:00:32.29878136 +0000 UTC m=+0.219840804 container exec 53f9f5e8dda735f05eb81d1b684c0d159d53601b93e16ccc276ab30167724430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 27 06:00:32 np0005537642 podman[100961]: 2025-11-27 11:00:32.395778905 +0000 UTC m=+0.075884397 container exec_died 53f9f5e8dda735f05eb81d1b684c0d159d53601b93e16ccc276ab30167724430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 06:00:32 np0005537642 podman[100941]: 2025-11-27 11:00:32.432309537 +0000 UTC m=+0.353369011 container exec_died 53f9f5e8dda735f05eb81d1b684c0d159d53601b93e16ccc276ab30167724430 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 27 06:00:32 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 27 06:00:32 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 115 pg[9.12( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=115) [1]/[0] r=-1 lpr=115 pi=[67,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:32 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 115 pg[9.12( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=67/67 les/c/f=68/68/0 sis=115) [1]/[0] r=-1 lpr=115 pi=[67,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:32 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:32 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:32 np0005537642 podman[101006]: 2025-11-27 11:00:32.76494251 +0000 UTC m=+0.105808630 container exec bdcf3b8372dc80dc778da56341fccbda9cc403cf6c6dbfd21e1cbbfaf9135c0c (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow)
Nov 27 06:00:32 np0005537642 podman[101027]: 2025-11-27 11:00:32.856859088 +0000 UTC m=+0.072324955 container exec_died bdcf3b8372dc80dc778da56341fccbda9cc403cf6c6dbfd21e1cbbfaf9135c0c (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow)
Nov 27 06:00:32 np0005537642 podman[101006]: 2025-11-27 11:00:32.871350595 +0000 UTC m=+0.212216695 container exec_died bdcf3b8372dc80dc778da56341fccbda9cc403cf6c6dbfd21e1cbbfaf9135c0c (image=quay.io/ceph/haproxy:2.3, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-haproxy-nfs-cephfs-compute-0-vcfcow)
Nov 27 06:00:32 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:32 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:32 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:32.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:32 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:32 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:32 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:33 np0005537642 podman[101074]: 2025-11-27 11:00:33.229289356 +0000 UTC m=+0.109952938 container exec 3d53a6719a5e282f93d17858adf01e17fba9bc28fc08c14a3a13a6d5e41c4691 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9)
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 27 06:00:33 np0005537642 podman[101094]: 2025-11-27 11:00:33.327759813 +0000 UTC m=+0.075844856 container exec_died 3d53a6719a5e282f93d17858adf01e17fba9bc28fc08c14a3a13a6d5e41c4691 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1793, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Nov 27 06:00:33 np0005537642 podman[101074]: 2025-11-27 11:00:33.359183498 +0000 UTC m=+0.239847080 container exec_died 3d53a6719a5e282f93d17858adf01e17fba9bc28fc08c14a3a13a6d5e41c4691 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-keepalived-nfs-cephfs-compute-0-zwobpl, version=2.2.4, description=keepalived for Ceph, architecture=x86_64, io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 27 06:00:33 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ::ffff:192.168.122.100 - - [27/Nov/2025:11:00:33] "GET /metrics HTTP/1.1" 200 46592 "" "Prometheus/2.51.0"
Nov 27 06:00:33 np0005537642 ceph-mgr[74636]: [prometheus INFO cherrypy.access.139953872425936] ::ffff:192.168.122.100 - - [27/Nov/2025:11:00:33] "GET /metrics HTTP/1.1" 200 46592 "" "Prometheus/2.51.0"
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 27 06:00:33 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 27 06:00:33 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:33 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:33 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:33 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:34 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v11: 353 pgs: 1 unknown, 1 remapped+peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 15 op/s
Nov 27 06:00:34 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 116 pg[9.11( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=114/67 les/c/f=115/68/0 sis=116) [1] r=0 lpr=116 pi=[67,116)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:34 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 116 pg[9.11( v 57'872 (0'0,57'872] local-lis/les=0/0 n=5 ec=67/50 lis/c=114/67 les/c/f=115/68/0 sis=116) [1] r=0 lpr=116 pi=[67,116)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:34 np0005537642 podman[101140]: 2025-11-27 11:00:34.525800235 +0000 UTC m=+0.856493205 container exec 3b2d33c696177af9f9c409dbd59184012d0c2369d434cb01038cb0ca1f741fd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:34 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:34 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b4001920 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:34 np0005537642 podman[101170]: 2025-11-27 11:00:34.744904527 +0000 UTC m=+0.179622036 container exec_died 3b2d33c696177af9f9c409dbd59184012d0c2369d434cb01038cb0ca1f741fd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 27 06:00:34 np0005537642 podman[101140]: 2025-11-27 11:00:34.760547647 +0000 UTC m=+1.091240637 container exec_died 3b2d33c696177af9f9c409dbd59184012d0c2369d434cb01038cb0ca1f741fd4 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:34 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 27 06:00:34 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:34 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 27 06:00:34 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:34.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 27 06:00:34 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:34 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:34 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:34.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:34 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 27 06:00:35 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 117 pg[9.11( v 57'872 (0'0,57'872] local-lis/les=116/117 n=5 ec=67/50 lis/c=114/67 les/c/f=115/68/0 sis=116) [1] r=0 lpr=116 pi=[67,116)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:35 np0005537642 podman[101216]: 2025-11-27 11:00:35.193325184 +0000 UTC m=+0.248355905 container exec 64887548efd8ea94ae6cc4d74ce8d20ae68a58e079077f7deb3f19d16095acef (image=quay.io/ceph/grafana:10.4.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 06:00:35 np0005537642 podman[101216]: 2025-11-27 11:00:35.373021481 +0000 UTC m=+0.428052212 container exec_died 64887548efd8ea94ae6cc4d74ce8d20ae68a58e079077f7deb3f19d16095acef (image=quay.io/ceph/grafana:10.4.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Nov 27 06:00:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:35 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 27 06:00:35 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:35 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:35 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 27 06:00:35 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 27 06:00:35 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 118 pg[9.12( v 57'872 (0'0,57'872] local-lis/les=0/0 n=4 ec=67/50 lis/c=115/67 les/c/f=116/68/0 sis=118) [1] r=0 lpr=118 pi=[67,118)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:35 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 118 pg[9.12( v 57'872 (0'0,57'872] local-lis/les=0/0 n=4 ec=67/50 lis/c=115/67 les/c/f=116/68/0 sis=118) [1] r=0 lpr=118 pi=[67,118)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:36 np0005537642 podman[101326]: 2025-11-27 11:00:36.069293189 +0000 UTC m=+0.076285689 container exec cb36667204a38dfbc0ab1cdb43efac259d05d4f7099b4b24b76953126a2609c5 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:36 np0005537642 podman[101326]: 2025-11-27 11:00:36.11726134 +0000 UTC m=+0.124253840 container exec_died cb36667204a38dfbc0ab1cdb43efac259d05d4f7099b4b24b76953126a2609c5 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 06:00:36 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 1 unknown, 1 remapped+peering, 351 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 18 op/s
Nov 27 06:00:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 06:00:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 06:00:36 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:36 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:36 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8001b80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:36 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:36 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:36 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:36.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:36 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:36 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:36 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:36.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:36 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 27 06:00:37 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:37 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 27 06:00:37 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 27 06:00:37 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 119 pg[9.12( v 57'872 (0'0,57'872] local-lis/les=118/119 n=4 ec=67/50 lis/c=115/67 les/c/f=116/68/0 sis=118) [1] r=0 lpr=118 pi=[67,118)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 06:00:37 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:37 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:37 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:37 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 06:00:37 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:37 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v16: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 229 B/s rd, 0 op/s; 49 B/s, 2 objects/s recovering
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Nov 27 06:00:38 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:38 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:38 np0005537642 systemd-logind[801]: New session 38 of user zuul.
Nov 27 06:00:38 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:38 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:38 np0005537642 systemd[1]: Started Session 38 of User zuul.
Nov 27 06:00:38 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:38 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:38 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:38.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:38 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:38 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:38 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:38.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.conf
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.conf
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.conf
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.conf
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 27 06:00:39 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 27 06:00:39 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:39 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:39 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:39 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:39 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:39 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8001b80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:39 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:39 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:39 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:39 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:40 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v18: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 194 B/s rd, 0 op/s; 41 B/s, 1 objects/s recovering
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mgr[74636]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mgr[74636]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 27 06:00:40 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:40 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Nov 27 06:00:40 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:40 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:40 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:40.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:40 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:40 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:40 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:40.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Nov 27 06:00:40 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 06:00:40 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 121 pg[9.15( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=121) [1] r=0 lpr=121 pi=[83,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:41 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:41 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:41 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:41 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8001b80 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:42 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v20: 353 pgs: 353 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 27 06:00:42 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:42 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: Updating compute-2:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: Updating compute-1:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: Updating compute-0:/var/lib/ceph/4c838139-e0c9-556a-a9ca-e4422f459af7/config/ceph.client.admin.keyring
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:42 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 122 pg[9.15( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[83,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:42 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 122 pg[9.15( empty local-lis/les=0/0 n=0 ec=67/50 lis/c=83/83 les/c/f=84/84/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[83,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 27 06:00:42 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:42 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:42 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:42.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:42 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:42 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:42 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:42.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:42 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 06:00:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ::ffff:192.168.122.100 - - [27/Nov/2025:11:00:43] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 27 06:00:43 np0005537642 ceph-mgr[74636]: [prometheus INFO cherrypy.access.139953872425936] ::ffff:192.168.122.100 - - [27/Nov/2025:11:00:43] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 27 06:00:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Nov 27 06:00:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:43 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:43 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 27 06:00:43 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:43 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:43 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:44 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v23: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 407 B/s rd, 0 op/s
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 27 06:00:44 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 123 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=88/89 n=4 ec=67/50 lis/c=88/88 les/c/f=89/89/0 sis=123 pruub=12.755812645s) [2] r=-1 lpr=123 pi=[88,123)/1 crt=57'872 mlcod 0'0 active pruub 313.831512451s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:44 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 123 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=88/89 n=4 ec=67/50 lis/c=88/88 les/c/f=89/89/0 sis=123 pruub=12.755731583s) [2] r=-1 lpr=123 pi=[88,123)/1 crt=57'872 mlcod 0'0 unknown NOTIFY pruub 313.831512451s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Nov 27 06:00:44 np0005537642 ceph-mon[74338]: log_channel(audit) log [DBG] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 27 06:00:44 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:44 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:44 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:44 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:44 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:44.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:44 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:44 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 27 06:00:44 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:44.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 27 06:00:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 27 06:00:45 np0005537642 podman[102699]: 2025-11-27 11:00:45.160155976 +0000 UTC m=+0.049945399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 06:00:45 np0005537642 ovs-vsctl[102736]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 27 06:00:45 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:45 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:45 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:45 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:45 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 27 06:00:46 np0005537642 podman[102699]: 2025-11-27 11:00:46.102167083 +0000 UTC m=+0.991956406 container create 13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 06:00:46 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v24: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Nov 27 06:00:46 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:46 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 27 06:00:46 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:46 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 27 06:00:46 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 124 pg[9.15( v 57'872 (0'0,57'872] local-lis/les=0/0 n=4 ec=67/50 lis/c=122/83 les/c/f=123/84/0 sis=124) [1] r=0 lpr=124 pi=[83,124)/1 luod=0'0 crt=57'872 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:46 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 124 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=88/89 n=4 ec=67/50 lis/c=88/88 les/c/f=89/89/0 sis=124) [2]/[1] r=0 lpr=124 pi=[88,124)/1 crt=57'872 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:46 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 124 pg[9.15( v 57'872 (0'0,57'872] local-lis/les=0/0 n=4 ec=67/50 lis/c=122/83 les/c/f=123/84/0 sis=124) [1] r=0 lpr=124 pi=[83,124)/1 crt=57'872 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:46 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 124 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=88/89 n=4 ec=67/50 lis/c=88/88 les/c/f=89/89/0 sis=124) [2]/[1] r=0 lpr=124 pi=[88,124)/1 crt=57'872 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 27 06:00:46 np0005537642 systemd[1]: Started libpod-conmon-13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707.scope.
Nov 27 06:00:46 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 27 06:00:46 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:46 np0005537642 podman[102699]: 2025-11-27 11:00:46.515950333 +0000 UTC m=+1.405739726 container init 13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 06:00:46 np0005537642 podman[102699]: 2025-11-27 11:00:46.533284842 +0000 UTC m=+1.423074205 container start 13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_yalow, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 27 06:00:46 np0005537642 systemd[1]: libpod-13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707.scope: Deactivated successfully.
Nov 27 06:00:46 np0005537642 great_yalow[102768]: 167 167
Nov 27 06:00:46 np0005537642 conmon[102768]: conmon 13408358973b98d2d4e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707.scope/container/memory.events
Nov 27 06:00:46 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:46 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8002ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:46 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:46 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:46 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:46.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:46 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:46 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:46 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:46.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:47 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:47 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:47 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:47 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:48 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 27 06:00:48 np0005537642 podman[102699]: 2025-11-27 11:00:48.188638808 +0000 UTC m=+3.078428131 container attach 13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 06:00:48 np0005537642 podman[102699]: 2025-11-27 11:00:48.189952956 +0000 UTC m=+3.079742279 container died 13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_yalow, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 06:00:48 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Nov 27 06:00:48 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:48 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:48 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:48 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 27 06:00:48 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:48.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 27 06:00:48 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:48 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000028s ======
Nov 27 06:00:48 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:48.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Nov 27 06:00:48 np0005537642 lvm[103187]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 06:00:48 np0005537642 lvm[103187]: VG ceph_vg0 finished
Nov 27 06:00:49 np0005537642 systemd[1]: var-lib-containers-storage-overlay-651fb6425d6ada9340185321ab4305d96387d1e80234246238e05bb9b6fc1a11-merged.mount: Deactivated successfully.
Nov 27 06:00:49 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 27 06:00:49 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 27 06:00:49 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 125 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=124/125 n=4 ec=67/50 lis/c=88/88 les/c/f=89/89/0 sis=124) [2]/[1] async=[2] r=0 lpr=124 pi=[88,124)/1 crt=57'872 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:49 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 125 pg[9.15( v 57'872 (0'0,57'872] local-lis/les=124/125 n=4 ec=67/50 lis/c=122/83 les/c/f=123/84/0 sis=124) [1] r=0 lpr=124 pi=[83,124)/1 crt=57'872 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 27 06:00:49 np0005537642 podman[102699]: 2025-11-27 11:00:49.339069308 +0000 UTC m=+4.228858631 container remove 13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_yalow, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 06:00:49 np0005537642 systemd[1]: libpod-conmon-13408358973b98d2d4e842ce0279ba20373b415607eeefbc8be0e5c992b73707.scope: Deactivated successfully.
Nov 27 06:00:49 np0005537642 podman[103311]: 2025-11-27 11:00:49.5262379 +0000 UTC m=+0.039160829 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 06:00:49 np0005537642 podman[103311]: 2025-11-27 11:00:49.621324069 +0000 UTC m=+0.134246938 container create 7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 06:00:49 np0005537642 systemd[1]: Started libpod-conmon-7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f.scope.
Nov 27 06:00:49 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:49 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab25aa538e4d5a43edeb8b63931c9f7437f05697a636b406235d1415d41d9f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:49 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab25aa538e4d5a43edeb8b63931c9f7437f05697a636b406235d1415d41d9f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:49 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab25aa538e4d5a43edeb8b63931c9f7437f05697a636b406235d1415d41d9f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:49 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab25aa538e4d5a43edeb8b63931c9f7437f05697a636b406235d1415d41d9f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:49 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cab25aa538e4d5a43edeb8b63931c9f7437f05697a636b406235d1415d41d9f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:49 np0005537642 podman[103311]: 2025-11-27 11:00:49.772847634 +0000 UTC m=+0.285770523 container init 7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wright, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Nov 27 06:00:49 np0005537642 podman[103311]: 2025-11-27 11:00:49.781426592 +0000 UTC m=+0.294349461 container start 7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wright, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 27 06:00:49 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:49 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8002ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:49 np0005537642 podman[103311]: 2025-11-27 11:00:49.797102643 +0000 UTC m=+0.310025512 container attach 7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wright, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 27 06:00:49 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:49 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4004580 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 27 06:00:50 np0005537642 musing_wright[103381]: --> passed data devices: 0 physical, 1 LVM
Nov 27 06:00:50 np0005537642 musing_wright[103381]: --> All data devices are unavailable
Nov 27 06:00:50 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v28: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 167 B/s rd, 0 op/s; 17 B/s, 0 objects/s recovering
Nov 27 06:00:50 np0005537642 systemd[1]: libpod-7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f.scope: Deactivated successfully.
Nov 27 06:00:50 np0005537642 podman[103311]: 2025-11-27 11:00:50.194978474 +0000 UTC m=+0.707901343 container died 7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 06:00:50 np0005537642 systemd[1]: var-lib-containers-storage-overlay-cab25aa538e4d5a43edeb8b63931c9f7437f05697a636b406235d1415d41d9f1-merged.mount: Deactivated successfully.
Nov 27 06:00:50 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 27 06:00:50 np0005537642 podman[103311]: 2025-11-27 11:00:50.408860193 +0000 UTC m=+0.921783062 container remove 7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_wright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 06:00:50 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 126 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=124/125 n=4 ec=67/50 lis/c=124/88 les/c/f=125/89/0 sis=126 pruub=14.892864227s) [2] async=[2] r=-1 lpr=126 pi=[88,126)/1 crt=57'872 mlcod 57'872 active pruub 322.033264160s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Nov 27 06:00:50 np0005537642 ceph-osd[82775]: osd.1 pg_epoch: 126 pg[9.16( v 57'872 (0'0,57'872] local-lis/les=124/125 n=4 ec=67/50 lis/c=124/88 les/c/f=125/89/0 sis=126 pruub=14.892785072s) [2] r=-1 lpr=126 pi=[88,126)/1 crt=57'872 mlcod 0'0 unknown NOTIFY pruub 322.033264160s@ mbc={}] state<Start>: transitioning to Stray
Nov 27 06:00:50 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 27 06:00:50 np0005537642 systemd[1]: libpod-conmon-7de8e57c298333c9d5a368d0d83bf00c849b7207dcf811cd306aaaf5a179005f.scope: Deactivated successfully.
Nov 27 06:00:50 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:50 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:50 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:50 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:50 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:50.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:50 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:50 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:50 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:50.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:51 np0005537642 podman[103726]: 2025-11-27 11:00:51.019023572 +0000 UTC m=+0.026035240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 06:00:51 np0005537642 podman[103726]: 2025-11-27 11:00:51.303930328 +0000 UTC m=+0.310942006 container create bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_joliot, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 27 06:00:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 27 06:00:51 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 27 06:00:51 np0005537642 systemd[1]: Started libpod-conmon-bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3.scope.
Nov 27 06:00:51 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:51 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 27 06:00:51 np0005537642 podman[103726]: 2025-11-27 11:00:51.578815693 +0000 UTC m=+0.585827411 container init bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 06:00:51 np0005537642 podman[103726]: 2025-11-27 11:00:51.594295344 +0000 UTC m=+0.601306982 container start bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 06:00:51 np0005537642 gracious_joliot[103753]: 167 167
Nov 27 06:00:51 np0005537642 podman[103726]: 2025-11-27 11:00:51.603931414 +0000 UTC m=+0.610943092 container attach bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Nov 27 06:00:51 np0005537642 systemd[1]: libpod-bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3.scope: Deactivated successfully.
Nov 27 06:00:51 np0005537642 podman[103726]: 2025-11-27 11:00:51.605189421 +0000 UTC m=+0.612201099 container died bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Nov 27 06:00:51 np0005537642 systemd[1]: var-lib-containers-storage-overlay-a256f9056ed449e1ac19011a609c3d579fd57020bb496036ad91f207dded2a77-merged.mount: Deactivated successfully.
Nov 27 06:00:51 np0005537642 podman[103726]: 2025-11-27 11:00:51.687753815 +0000 UTC m=+0.694765463 container remove bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 06:00:51 np0005537642 systemd[1]: libpod-conmon-bae0cba2e967758dbc97b85d2a92781531b320a71deeb600a6f5df4772b494f3.scope: Deactivated successfully.
Nov 27 06:00:51 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:51 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c30 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:51 np0005537642 podman[103786]: 2025-11-27 11:00:51.866533022 +0000 UTC m=+0.051089079 container create 8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dirac, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 27 06:00:51 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:51 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8002ae0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:51 np0005537642 systemd[1]: Started libpod-conmon-8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe.scope.
Nov 27 06:00:51 np0005537642 podman[103786]: 2025-11-27 11:00:51.846310793 +0000 UTC m=+0.030866880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 06:00:51 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:51 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d62f5f048c5d8e3b62ef3e9f85e09ee7e2b3a8357a2d5099fddfb5177d49b43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:51 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d62f5f048c5d8e3b62ef3e9f85e09ee7e2b3a8357a2d5099fddfb5177d49b43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:51 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d62f5f048c5d8e3b62ef3e9f85e09ee7e2b3a8357a2d5099fddfb5177d49b43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:51 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d62f5f048c5d8e3b62ef3e9f85e09ee7e2b3a8357a2d5099fddfb5177d49b43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:51 np0005537642 podman[103786]: 2025-11-27 11:00:51.963918418 +0000 UTC m=+0.148474495 container init 8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Nov 27 06:00:51 np0005537642 podman[103786]: 2025-11-27 11:00:51.97187868 +0000 UTC m=+0.156434737 container start 8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dirac, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Nov 27 06:00:51 np0005537642 podman[103786]: 2025-11-27 11:00:51.975178066 +0000 UTC m=+0.159734143 container attach 8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dirac, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Nov 27 06:00:52 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v31: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]: {
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:    "1": [
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:        {
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "devices": [
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "/dev/loop3"
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            ],
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "lv_name": "ceph_lv0",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "lv_size": "21470642176",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=4c838139-e0c9-556a-a9ca-e4422f459af7,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=047f3e15-ba18-4c86-b24b-f8e9584c5eff,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "lv_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "name": "ceph_lv0",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "path": "/dev/ceph_vg0/ceph_lv0",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "tags": {
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.block_uuid": "whPowo-sd77-WkNQ-nG3J-nhwn-01QM-SzpkeN",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.cephx_lockbox_secret": "",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.cluster_fsid": "4c838139-e0c9-556a-a9ca-e4422f459af7",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.cluster_name": "ceph",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.crush_device_class": "",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.encrypted": "0",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.osd_fsid": "047f3e15-ba18-4c86-b24b-f8e9584c5eff",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.osd_id": "1",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.osdspec_affinity": "default_drive_group",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.type": "block",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.vdo": "0",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:                "ceph.with_tpm": "0"
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            },
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "type": "block",
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:            "vg_name": "ceph_vg0"
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:        }
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]:    ]
Nov 27 06:00:52 np0005537642 heuristic_dirac[103808]: }
Nov 27 06:00:52 np0005537642 systemd[1]: libpod-8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe.scope: Deactivated successfully.
Nov 27 06:00:52 np0005537642 podman[103786]: 2025-11-27 11:00:52.317209346 +0000 UTC m=+0.501765393 container died 8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Nov 27 06:00:52 np0005537642 systemd[1]: var-lib-containers-storage-overlay-7d62f5f048c5d8e3b62ef3e9f85e09ee7e2b3a8357a2d5099fddfb5177d49b43-merged.mount: Deactivated successfully.
Nov 27 06:00:52 np0005537642 podman[103786]: 2025-11-27 11:00:52.370877519 +0000 UTC m=+0.555433576 container remove 8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Nov 27 06:00:52 np0005537642 systemd[1]: libpod-conmon-8d122932edeea588a21a32a52473916042cbf35df1686c8b110ec546ca4db7fe.scope: Deactivated successfully.
Nov 27 06:00:52 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:52 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c40045a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:52 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:52 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:52 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:52.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:52 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:52 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:52 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:52.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:53 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 27 06:00:53 np0005537642 podman[103939]: 2025-11-27 11:00:52.993939244 +0000 UTC m=+0.028579303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 06:00:53 np0005537642 podman[103939]: 2025-11-27 11:00:53.184133413 +0000 UTC m=+0.218773452 container create 4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_johnson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 27 06:00:53 np0005537642 systemd[1]: Started libpod-conmon-4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290.scope.
Nov 27 06:00:53 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:53 np0005537642 systemd[1]: Starting Hostname Service...
Nov 27 06:00:53 np0005537642 podman[103939]: 2025-11-27 11:00:53.394428257 +0000 UTC m=+0.429068256 container init 4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_johnson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 06:00:53 np0005537642 podman[103939]: 2025-11-27 11:00:53.405330624 +0000 UTC m=+0.439970633 container start 4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 27 06:00:53 np0005537642 optimistic_johnson[103970]: 167 167
Nov 27 06:00:53 np0005537642 systemd[1]: libpod-4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290.scope: Deactivated successfully.
Nov 27 06:00:53 np0005537642 podman[103939]: 2025-11-27 11:00:53.465938189 +0000 UTC m=+0.500578248 container attach 4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_johnson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 27 06:00:53 np0005537642 podman[103939]: 2025-11-27 11:00:53.46666501 +0000 UTC m=+0.501305039 container died 4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Nov 27 06:00:53 np0005537642 systemd[1]: Started Hostname Service.
Nov 27 06:00:53 np0005537642 ceph-mgr[74636]: [prometheus INFO cherrypy.access.139953872425936] ::ffff:192.168.122.100 - - [27/Nov/2025:11:00:53] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 27 06:00:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-mgr-compute-0-qnrkij[74632]: ::ffff:192.168.122.100 - - [27/Nov/2025:11:00:53] "GET /metrics HTTP/1.1" 200 48266 "" "Prometheus/2.51.0"
Nov 27 06:00:53 np0005537642 systemd[1]: var-lib-containers-storage-overlay-2f69b99c3cf32778e53ea254e5d9f32a01dcc9b78de4c4e7e002f992771a0f92-merged.mount: Deactivated successfully.
Nov 27 06:00:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:53 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:53 np0005537642 podman[103939]: 2025-11-27 11:00:53.799984127 +0000 UTC m=+0.834624176 container remove 4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 06:00:53 np0005537642 systemd[1]: libpod-conmon-4a01b496d93fd1767628403bf1472abb7fc8d3a588752b6241bc570fe1c22290.scope: Deactivated successfully.
Nov 27 06:00:53 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:53 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37a8003c50 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:54 np0005537642 podman[104022]: 2025-11-27 11:00:54.023306631 +0000 UTC m=+0.062402998 container create f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_grothendieck, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 27 06:00:54 np0005537642 systemd[1]: Started libpod-conmon-f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14.scope.
Nov 27 06:00:54 np0005537642 systemd[1]: Started libcrun container.
Nov 27 06:00:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0672e2ab1686651405d8635776197ba2320b6b2818be7a3e2f7ebe0731b9a1c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0672e2ab1686651405d8635776197ba2320b6b2818be7a3e2f7ebe0731b9a1c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0672e2ab1686651405d8635776197ba2320b6b2818be7a3e2f7ebe0731b9a1c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:54 np0005537642 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0672e2ab1686651405d8635776197ba2320b6b2818be7a3e2f7ebe0731b9a1c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 27 06:00:54 np0005537642 podman[104022]: 2025-11-27 11:00:54.002142285 +0000 UTC m=+0.041238672 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Nov 27 06:00:54 np0005537642 podman[104022]: 2025-11-27 11:00:54.137123415 +0000 UTC m=+0.176219802 container init f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_grothendieck, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Nov 27 06:00:54 np0005537642 podman[104022]: 2025-11-27 11:00:54.150823744 +0000 UTC m=+0.189920111 container start f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 27 06:00:54 np0005537642 podman[104022]: 2025-11-27 11:00:54.163463713 +0000 UTC m=+0.202560070 container attach f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 27 06:00:54 np0005537642 ceph-mgr[74636]: log_channel(cluster) log [DBG] : pgmap v32: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 36 B/s, 1 objects/s recovering
Nov 27 06:00:54 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Nov 27 06:00:54 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 27 06:00:54 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:54 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37b8003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:54 np0005537642 lvm[104224]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 27 06:00:54 np0005537642 lvm[104224]: VG ceph_vg0 finished
Nov 27 06:00:54 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 27 06:00:54 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:54 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.001000029s ======
Nov 27 06:00:54 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.100 - anonymous [27/Nov/2025:11:00:54.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Nov 27 06:00:54 np0005537642 musing_grothendieck[104048]: {}
Nov 27 06:00:54 np0005537642 radosgw[89563]: ====== starting new request req=0x7f291eca25d0 =====
Nov 27 06:00:54 np0005537642 radosgw[89563]: ====== req done req=0x7f291eca25d0 op status=0 http_status=200 latency=0.000000000s ======
Nov 27 06:00:54 np0005537642 radosgw[89563]: beast: 0x7f291eca25d0: 192.168.122.102 - anonymous [27/Nov/2025:11:00:54.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Nov 27 06:00:54 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 27 06:00:54 np0005537642 systemd[1]: libpod-f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14.scope: Deactivated successfully.
Nov 27 06:00:54 np0005537642 systemd[1]: libpod-f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14.scope: Consumed 1.131s CPU time.
Nov 27 06:00:54 np0005537642 podman[104239]: 2025-11-27 11:00:54.997949783 +0000 UTC m=+0.036267177 container died f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_grothendieck, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 27 06:00:55 np0005537642 systemd[1]: var-lib-containers-storage-overlay-0672e2ab1686651405d8635776197ba2320b6b2818be7a3e2f7ebe0731b9a1c5-merged.mount: Deactivated successfully.
Nov 27 06:00:55 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:55 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c40045c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:55 np0005537642 podman[104239]: 2025-11-27 11:00:55.796527449 +0000 UTC m=+0.834844833 container remove f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_grothendieck, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Nov 27 06:00:55 np0005537642 systemd[1]: libpod-conmon-f06475652a1a4543104be009239ca6b192c66af61481bbadc380459112f0ea14.scope: Deactivated successfully.
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Nov 27 06:00:55 np0005537642 ceph-4c838139-e0c9-556a-a9ca-e4422f459af7-nfs-cephfs-2-0-compute-0-ymahkb[97101]: 27/11/2025 11:00:55 : epoch 69282ef5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37ac004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: log_channel(audit) log [INF] : from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
Nov 27 06:00:55 np0005537642 ceph-mon[74338]: from='mgr.14655 192.168.122.100:0/3058013073' entity='mgr.compute-0.qnrkij' 
